The client is a leading law firm obligated to verify client signatures on all documents. The task was labor-intensive, time-consuming, and prone to human error, since signatures evolve over time.
The procedure required manually inspecting client signature cards and identification documents, and comparing them with signed legal documents.
Vacon developed an AI tool that identifies and extracts all signatures from documents and compares them against a library of client signatures.
How it works:
Using Amazon Textract, signatures are extracted from client signature files, identification documents, and signed documents. Extraction uses connected-component analysis on the image file.
A PyTorch model classifies the extracted signature images, accommodating natural variation in signature width-to-height ratios. The trained model's confidence scores filter out false positives.
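The case study does not publish code, but the connected-components step above can be sketched in plain Python. The function names, the toy binary grid, and the bounding-box helper are illustrative assumptions, not the production pipeline:

```python
from collections import deque

def connected_components(grid):
    """Label 4-connected components of ink pixels (1s) in a binary image."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1
                labels[r][c] = current
                q = deque([(r, c)])
                while q:  # flood-fill this component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels

def bounding_boxes(labels, n):
    """(top, left, bottom, right) box per component - one candidate signature crop each."""
    boxes = {}
    for r, row in enumerate(labels):
        for c, lab in enumerate(row):
            if lab:
                t, l, b, rt = boxes.get(lab, (r, c, r, c))
                boxes[lab] = (min(t, r), min(l, c), max(b, r), max(rt, c))
    return [boxes[i] for i in range(1, n + 1)]
```

Each bounding box would then be cropped and passed to the classifier, which is where the confidence-based false-positive filtering happens.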
90% accuracy in document signature verification.
Seconds to verify signatures.
Instant detection of signature fraud.
90% reduction in administrative resources spent on document authenticity checks.
Social sentiment moves at the speed of social media – streaming by the second. During COVID-19 the world came closer together, yet opinions were divided online.
The client wanted to gauge social sentiment on COVID-19 to provide data-driven insights to help align product offerings to audiences segmented by sentiment and create targeted result-driven campaigns.
Vacon created a near-real-time dashboard showing hourly analytics on COVID tweets and their sentiment, reporting the number of positive, neutral, and negative COVID tweets on the hour.
Word cloud sentiments showcased dominant words being used on Twitter.
How it works:
A Python script queried Twitter every hour, scraping tweets mentioning COVID.
The tweet data was cleaned of stop words and other noise, including emojis.
After cleaning, a text classifier was built using tf-idf features combined with an SVM classifier.
Once training was complete, an API was built to report results on live tweets to a Tableau dashboard.
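The tf-idf plus SVM combination above can be sketched with scikit-learn, one plausible realization of the pipeline (the write-up does not name the library). The toy tweets and labels are hypothetical; the production model trained on real scraped data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled tweets standing in for the cleaned Twitter data.
tweets = [
    "vaccines rolling out, feeling hopeful",
    "grateful for healthcare workers",
    "lockdown again, this is exhausting",
    "cases rising, terrible news",
]
labels = ["positive", "positive", "negative", "negative"]

# tf-idf features feeding a linear SVM; stop-word removal stands in
# for the fuller cleaning step (emojis, URLs) described above.
model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LinearSVC(random_state=0),
)
model.fit(tweets, labels)
```

A trained pipeline like this is what the hourly API would call on each fresh batch of tweets before pushing counts to the Tableau dashboard.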
Reduced client’s customer acquisition costs (CAC) by 37%
Conversions up 16%
Click-through rates (CTR) increased by 27%
The client, an EdTech startup, teaches young children the alphabet via a mobile application. With its existing teaching methods, the client was unsure of student progress and retention of the knowledge taught in lessons. The client wanted to leverage technology by “gamifying” the science of learning and tracking all progress.
The idea: codify dopamine into the user experience and make learning enjoyable and engaging, increasing time learning on the app and user retention.
Vacon proposed a speech-to-text API embedded inside the application that hears the student’s voice and scores accuracy of recognition and speed of recall. Positive affirmation is awarded through collectible accolades (badges) for the accomplishment of milestones. Passing the minimum set learning requirements unlocks the next lesson.
How it works:
Voice activity detection (VAD) was conducted on an audio dataset to extract spoken words from large audio files.
After extracting the short audio clips, we built a deep-learning model in PyTorch that classifies each clip into 10 classes (0 to 9).
Once the model reached 95%+ test accuracy, a Flask API was created and embedded into the mobile application.
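The case study does not say which VAD method was used; a minimal energy-threshold sketch shows the idea of carving voiced segments out of a long recording. The frame length and threshold here are assumed values for illustration:

```python
def voiced_segments(samples, frame_len=160, threshold=0.02):
    """Energy-based voice activity detection: return (start, end) sample
    ranges whose frame RMS energy exceeds the threshold."""
    segments = []
    start = None
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
        if rms >= threshold:
            if start is None:
                start = i          # a voiced region begins
        elif start is not None:
            segments.append((start, i))  # voiced region just ended
            start = None
    if start is not None:
        segments.append((start, len(samples)))
    return segments
```

Each returned segment would then be cut out and fed to the PyTorch classifier as one candidate spoken word.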
The API transcribes and matches speech within 50 milliseconds at 95% accuracy.
Daily Active Users (DAU) increased by 19%
12% increase in retention rate
32% increase in average session length
A call center's fleet of Sales Development Representatives (SDRs) is tasked with meeting daily KPIs, such as dials and call connections – key metrics for an SDR.
The challenge: dialing a number doesn’t guarantee a connection to a human. Calling phone numbers that ring out unanswered, or divert to voice mail, wastes time and is unproductive.
If you’re an SDR, a string of consecutive no answer dials or diverts to voice mail feels demoralizing and demotivating. The role of an SDR is challenging, and maintaining peak mental performance is tough when you can’t talk to a human to do your job.
Vacon created a script that dials a prospect’s number and uses an AI model to classify whether a person answers, the call goes unanswered, or it diverts to voice mail.
The AI model is a “call classifier” that distinguishes between:
Live human voice
No answer (call rings out)
Voice mail recording
By training the AI model on these three classes, the system routes SDRs only to calls answered by a live human.
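The routing logic around the classifier can be sketched in a few lines; the enum values mirror the three classes above, while the function and return strings are hypothetical names for illustration:

```python
from enum import Enum

class CallOutcome(Enum):
    """The three labels the call classifier distinguishes."""
    HUMAN = "live human voice"
    NO_ANSWER = "no answer (call rings out)"
    VOICEMAIL = "voice mail recording"

def route_call(outcome):
    """Only calls classified as a live human reach an SDR;
    everything else is dropped before any SDR time is spent."""
    if outcome is CallOutcome.HUMAN:
        return "connect to SDR"
    return "hang up and dial next number"
```

The dialer script would call `route_call` with the classifier's verdict for each dial, so an SDR only ever hears a connected human.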
100% call connection rate (previously unheard of in the call center industry)
Zero wasted time on unproductive dials
120% increase in outbound dials
23% uptick in meetings booked
32% increase in sales pipeline
SDRs report higher job satisfaction and happiness with the AI model
University student attendance comes with many challenges. Done manually, it’s a repetitive, time-consuming task, where some students have found ways of ‘gaming the system’. The larger the class, the less likely the teacher or professor remembers each student and the easier it is for fellow students to act as a proxy for absent friends.
The client, a leading university, was suffering from inconsistency in the process and losing valuable time to manual attendance recording. Maintaining a student attendance register is a perennial problem for universities that rely on a manual headcount.
Vacon recommended facial recognition technology to automate student attendance using existing CCTV throughout the university. Vacon developed an AI model and web-based software that recognizes students as they walk through entrance hallways scattered throughout the university, and on approach to classes and lecture halls.
Matching was conducted by a one-time biometric photo upload to a database used as the control for machine learning identification. Only one photo per student was required for accurate identification.
How it works:
Detection: the HOG model from the face_recognition library in Python performs face detection.
Embedding: FaceNet computes a facial embedding for every image.
Classification: detected faces are matched to the photo database by comparing embeddings; a distance threshold determines whether a face matches a stored record.
Results: WebSockets push results to the web page, including the student’s name and student ID number alongside their photo.
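The embedding-threshold matching step can be sketched with plain vectors. The 0.6 cutoff is the conventional default distance used by the face_recognition library, but the value actually tuned for this deployment is not stated, and the function and database names are assumptions:

```python
import math

MATCH_THRESHOLD = 0.6  # assumed cutoff; face_recognition's conventional default

def best_match(embedding, db):
    """Return (student_id, distance) for the closest stored embedding,
    or (None, distance) if nothing falls under the threshold."""
    best_id, best_dist = None, float("inf")
    for student_id, stored in db.items():
        dist = math.dist(embedding, stored)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    if best_dist > MATCH_THRESHOLD:
        return None, best_dist  # unknown face: no attendance recorded
    return best_id, best_dist
```

Because matching compares against one stored embedding per student, a single enrollment photo per student suffices, as the case study notes.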
Zero minutes per class spent on repetitive attendance administration.
100% of teachers’ and professors’ time given back to what they do best: teaching.
100% of class time dedicated to teaching, with no administrative tasks during lessons.
99% accuracy in facial recognition conducted within seconds.
Anecdotal result: an overall reduction in complaints of behavioral misconduct, as students were aware the cameras recognized their location and movements.
The client is a carpooling and ride-sharing app. Drivers face challenges knowing where to position their vehicles to find waiting passengers, while passengers express frustration trying to find an available driver.
Vacon’s team of data scientists and developers employed the latest in geo-point data tracking and devised a novel solution through predictive modeling.
“That which is measured, gets managed and improves.”
By measuring driver and passenger trip data – routes, locations, times, and passenger counts – machine learning models predict optimal routes, times, and locations with 80% certainty.
How it works:
A clustering algorithm records geo-points and rides at a given time of the day and groups trips into regions.
A Flask API built on top of the algorithm retrieves predicted geo-locations based on the driver’s current position.
Geo-points are retrieved using the OSMnx framework in Python and parsed into arrays that machine learning libraries can process; the libraries retrieve roads and distances for car travel.
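The write-up does not name the clustering algorithm; a minimal sketch with scikit-learn's KMeans shows how logged pickup geo-points could be grouped into regions and a driver's position assigned to one. The coordinates, cluster count, and function name are all illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pickup geo-points (lat, lon) logged at a given hour of day.
pickups = np.array([
    [40.71, -74.00], [40.72, -74.01], [40.73, -74.00],   # one dense area
    [40.85, -73.94], [40.86, -73.93], [40.84, -73.95],   # another dense area
])

# Group historical trips into pickup regions.
regions = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pickups)

def region_for(lat, lon):
    """Region a driver's current position falls into - what the
    Flask layer would look up before suggesting a destination."""
    return int(regions.predict(np.array([[lat, lon]]))[0])
```

With regions learned per time-of-day slice, the app can direct a driver toward whichever region historically has the most waiting passengers right now.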
The ride-sharing app suggests, with statistical confidence from the machine learning models, which route and destination each driver should take to increase the likelihood of finding waiting passengers.
The solution provides an exceptional user experience while collecting training data: driver and passenger activity, routes, locations, and times now continually refine the models for an optimized driver and passenger experience.
80% predictive accuracy meant drivers booked an average of 32% more passenger rides.
Reduced passenger waiting time by 7 minutes.
Anecdotal results: lower carbon emissions, less wasted fuel, reduced vehicle running costs, and less traffic congestion (the right vehicle, in the right place, at the right time).
21% more revenue booked by ride sharing app due to increased productivity from route efficiencies.