id | project_link | project_description |
---|---|---|
10,000 | https://devpost.com/software/crazy-cows-game | P1 Cow Sprite
P2 Cow Sprite
Inspiration
Games like Super Smash Bros. Brawl and Brawlhalla inspired us.
How we built it
We built it using Unity and C#.
Challenges we ran into
One of the challenges we faced was implementing two-player interactions.
Accomplishments that we're proud of
We're proud that we created a game that allows its players to bond through an immersive gaming experience.
What we learned
We learned that collaboration is extremely important.
What's next for Crazy Cows Game
We will further develop this game after Bay Hacks by adding extra features to improve our game.
Built With
c#
unity
Try it out
github.com |
10,000 | https://devpost.com/software/covidtrade-1vkgez | COVIDPRO19 is an app that combats the coronavirus by enabling hospitals to ask members of the community for much-needed medical supplies. Many people are unsure how to help. People who 3D-print face protection and make masks at home can effortlessly contribute to combating the coronavirus through this app. Hospitals can simply update their needs in the app when they require much-needed medical supplies. With the CORONAPRO app, another life will not be lost unnecessarily.
Built With
sketch
swift |
10,000 | https://devpost.com/software/remote-patient-monitoring-system-zjfnc5 | Block diagram of remote patient monitoring system
Problem Statement
A major challenge during a pandemic outbreak like COVID-19 is overwhelmed hospitals. At present, hospitals do not have the capacity for the large number of incoming patients. There is a need for a technology platform capable of remote monitoring and of engaging patients in their homes. Such a platform should also facilitate communication between quarantined people and the healthcare service and maintain visibility of those recently discharged.
Proposed Solution
A Remote Patient Monitoring (RPM) platform offers an ideal way to monitor patients while they are in quarantine or at home. The platform offers an end-to-end solution, from devices all the way to a central command center with dashboards and analytics, along with managed services.
The device hub includes sensors that measure vital signs: body temperature, heart rate, blood pressure, and blood SpO2 level. If a patient's clinical situation deteriorates, the system, supported by health-worker oversight, can respond rapidly through the central command center and decide whether the patient needs a more intensive level of medical care. At the same time, providers are at no risk of exposure to COVID-19 when they manage a patient remotely. Preserving a healthy clinician workforce is highly important at any time, and especially during a health crisis like COVID-19.
During this public health emergency, it is imperative that governments and the healthcare system adapt as the situation warrants and act on measures to save lives.
Implementation
If small patient monitors could be connected to a network wirelessly, patients would be able to move around freely while their physiological signals are monitored. Medical personnel could then be informed about a patient's critical condition regardless of their whereabouts, and the patient could be treated promptly if an emergency occurs. Furthermore, portable devices can be integrated into the healthcare environment and used to develop novel applications. We will therefore develop a portable embedded device that monitors the condition of patients in real time using a biomedical sensor network (pulse oximeter for SpO2, thermometer, respiration sensor, and blood pressure cuff) and transmits the physiological signals over wireless communication so they can be monitored remotely on a graphic display (an Android smartphone) and on the web. Using a web server and database subsystem, the physiological data can be accessed anywhere in the world at any time, and the device detects emergencies and informs medical personnel when they occur.
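As a rough illustration of this pipeline (not the actual device firmware), the sketch below shows how a portable node might package periodic sensor readings and push them to the web server and database subsystem; the endpoint URL, field names, and the read_vitals() helper are hypothetical placeholders.

```python
import time
import requests

API_URL = "https://rpm-server.example.org/api/vitals"  # hypothetical endpoint, not part of the project

def read_vitals():
    """Placeholder for reading the biomedical sensors (SpO2, temperature, BP, respiration)."""
    return {"spo2": 97, "temperature_c": 36.8, "bp_systolic": 118,
            "bp_diastolic": 76, "respiration_rate": 16}

def is_emergency(vitals):
    # Simple illustrative thresholds; real triage limits would be set by clinicians.
    return vitals["spo2"] < 92 or vitals["temperature_c"] > 39.0

while True:
    vitals = read_vitals()
    payload = {"patient_id": "P-001", "timestamp": time.time(),
               "vitals": vitals, "alert": is_emergency(vitals)}
    # The web server / database subsystem stores the reading and notifies staff on alert.
    requests.post(API_URL, json=payload, timeout=10)
    time.sleep(60)  # report roughly once per minute
```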
Expected Result
RPM will ease access to patient data, help deliver high-quality care to more patients, and reduce exposure for healthcare staff. RPM is a patient monitoring system: it collects patients' physiological signals using a biomedical sensor network and processes them so they can be interpreted easily by medical personnel. Unlike conventional patient monitoring, RPM is small, portable, battery operated, and based on wireless communication. Our device could therefore be useful in a healthcare environment where remote monitoring of patients is essential.
Action Plan after the Hackathon
After the Hackathon, we will develop the idea further and standardize decisions regarding supply and distribution with the platform.
With adequate funding, we will launch this product within 5-6 weeks after the hackathon.
Built With
api
iot
web
Try it out
github.com
he-s3.s3.amazonaws.com |
10,000 | https://devpost.com/software/drug-and-food-delivery-robot-for-covid19-patients | DRUG AND FOOD DELIVERY ROBOT FOR COVID19 PATIENTS
We break the wall of infection from the coronavirus: we built a smart UGV (unmanned ground vehicle) for monitoring patients and delivering food and medicine to COVID-19 patients. Our product prevents people from coming into direct contact with infected patients, and it carries a UVC light to kill the coronavirus and other pathogens as well as an automatic sanitizer dispenser.
The drive subsystem has two autonomously controlled drive wheels with the common axis centered on the robot. A spring suspension ensures that the drive wheels remain in contact with the floor even if it is rough or bumpy and four corner casters offer stability.
The HelpMate is provided with structured-light and ultrasound range sensing. This sensing continuously offers a set of range values. These sensors are used for the detection of obstacles and the identification of walls, helping to determine the orientation and location of the robot.
Ultrasound sensors are also provided at the sides of the robot to offer sufficient information about an obstacle and help in avoiding the same.
Touch-sensitive bumpers at the front and back of the HelpMate catch obstacles that the range sensors may miss. An LCD and a keypad act as the user interface. Turn signals and warning lights activate when the HelpMate robot is not on the right path.
The HelpMate is provided with a locked backpack for carrying meal trays and lab supplies. An offline CAD system tailored for HelpMate applications generates topographical and geometrical information about the elevator lobbies, hospital hallways, elevators and stations.
Advantages of Automated Delivery Robots
The key benefits of automated delivery robots in hospitals are:
• Medicines are delivered with higher accountability via an integrated chain-of-custody system.
• Automated delivery enables pharmacy technicians to focus on high-value tasks, such as mixing IVs, without making mistakes.
• Delivery of medicines can be more frequent, and nurses can concentrate on caring for patients rather than worrying about missing medicines and supplies.
• Automated delivery brings down costs and improves on-time reliability.
• Waste can be collected more frequently, improving control of infection and maintaining a neat appearance in the facility.
• Automated waste transport brings down the risk of injuries from the transport of heavy loads.
• Lab test items can be delivered immediately, hence speeding the testing process.
• Some of these systems have call functionality to deliver to departments behind locked doors.
• Accurate tracking of high-priced equipment and supplies ensures that fewer items are lost.
How it Works
The robot receives instructions through a human–machine interface. An installed knowledge base helps the robot to maneuver around the hospital. The robot is equipped to climb stairs, detect obstacles, move in lifts and call for opening or shutting doors.
The robot is loaded with supplies, given specific instructions through the human–machine interface and then it goes to its destination. It offloads the supplies, takes back anything else and moves on.
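As a very rough, hypothetical sketch of the delivery workflow described above (the real robot firmware is not part of this write-up), the loop below drives waypoint to waypoint and pauses whenever the ultrasound sensors report an obstacle; every helper function and threshold here is a placeholder.

```python
# Hypothetical sketch of the delivery workflow; the sensor and drive helpers stand in
# for firmware that is not included in this write-up.
SAFE_DISTANCE_CM = 50

def ultrasonic_distance_cm():
    """Placeholder: nearest obstacle distance reported by the ultrasound sensors."""
    return 120

def at_waypoint(waypoint):
    """Placeholder: check the robot's position against the CAD-derived map."""
    return True

def drive_toward(waypoint):
    """Placeholder: issue one motion command toward the next waypoint."""
    pass

def deliver(route, payload):
    print(f"Loaded: {payload}")
    for waypoint in route:
        while not at_waypoint(waypoint):
            if ultrasonic_distance_cm() < SAFE_DISTANCE_CM:
                continue  # an obstacle is in the way; wait until the path is clear
            drive_toward(waypoint)
    print("Arrived at destination: unloading supplies and heading back.")

deliver(route=["lobby", "lift", "ward 3"], payload=["meal trays", "medicine"])
```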
Built With
hardware
robot
robotalker
uptime-robot
Try it out
smartbackyardcreators.blogspot.com |
10,004 | https://devpost.com/software/filter-point | Impact Networking manages 594 Meraki networks across the nation. Impact has begun to include Umbrella for many of its clients to augment protection for roaming endpoints when users are not in the office. One of our field network engineers reached out and asked if there was a way to copy the Filtered Categories, Whitelists, and Blacklists from Meraki to Umbrella. We realized a method to do this did not exist. Shortly after, we heard about the Meraki Hackathon and knew exactly what we wanted to work on!
The team was assembled from our Enterprise Development, Managed IT Operations, and Managed Security groups to make this idea a reality. The first question was what to build this on; with contributors from different backgrounds and experience levels, this was not a simple question. The network teams were more accustomed to hacking away with PowerShell, cURL, PHP, and Python, while our enterprise development team was experienced in .NET and low-code platforms such as Mendix. After outlining the requirements and the timeframe, we opted to go with Mendix!
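The team ultimately built the tool in Mendix, but as a hedged sketch of the Meraki half of such a copy, the snippet below reads a network's filtered categories, whitelist, and blacklist through the public Dashboard API; the API key and network ID are placeholders, and the push into Umbrella is left as a comment because those details are not described here.

```python
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
NETWORK_ID = "N_123456"  # placeholder network ID
headers = {"X-Cisco-Meraki-API-Key": MERAKI_KEY}

# Read the MX content filtering settings (blocked categories, allow/block URL patterns).
resp = requests.get(
    f"https://api.meraki.com/api/v1/networks/{NETWORK_ID}/appliance/contentFiltering",
    headers=headers, timeout=30,
)
resp.raise_for_status()
filtering = resp.json()

blocked_categories = [c["name"] for c in filtering.get("blockedUrlCategories", [])]
allow_list = filtering.get("allowedUrlPatterns", [])
block_list = filtering.get("blockedUrlPatterns", [])

print("Categories to block in Umbrella:", blocked_categories)
print("Destination allow list:", allow_list)
print("Destination block list:", block_list)
# Pushing these into Umbrella destination lists / policies would happen here,
# using whatever Umbrella management API access the organization has.
```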
Built With
css
html
java
javascript
mendix
meraki-api
react |
10,004 | https://devpost.com/software/rapid-retail-android-pandemic-incident-disruptor | API Call
Queue Dashboard
Inspiration
COVID-19. Redefining "normal". Keeping retail space safe for customers and employees.
What it does
COVID-19 has changed everything, and we, as human beings, are in the process of redefining "normal". Retail is one space that needs to rethink the way it operates to ensure the safety of employees and customers. Team 2 at NTT has developed an idea and application to assist with that. Using Meraki MV cameras and the MV Sense API, our clients are able to keep an accurate count of people entering and exiting their store or location. The cameras do the counting while our application feeds the data to a web UI showing the store's capacity as well as the number of people in the store at any given time. Our application also ties in with Webex Teams to notify employees of thresholds such as approaching capacity, at capacity, or over capacity. This allows employees to move towards an entrance if capacity is approaching 100% and close doors if capacity is reached. If customers would prefer not to wait outside the entrance, we have added a feature where they can scan a QR code and put themselves into the queue.
How we built it
We split the work up as best we could. Abha tackled the MV Sense API using Python. Luciano took on the web UI using Node.js. Will also used Node.js to integrate with Webex Teams for notifications. James and I helped out where we could; as the judges will be able to tell, we have the least coding background in the group.
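A rough sketch of the counting-and-alerting loop, assuming a single MV camera whose full-frame analytics zone ("0") covers the entrance, is shown below; the serial number, Webex room ID, capacity figure, and thresholds are placeholders rather than the team's actual values.

```python
import time
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
WEBEX_TOKEN = "YOUR_WEBEX_BOT_TOKEN"
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"   # placeholder MV serial
ROOM_ID = "WEBEX_ROOM_ID"          # placeholder Webex Teams space
CAPACITY = 50                      # placeholder store capacity

def people_in_store():
    # Live person count from the camera's analytics zones (zone "0" is the full frame).
    r = requests.get(
        f"https://api.meraki.com/api/v1/devices/{CAMERA_SERIAL}/camera/analytics/live",
        headers={"X-Cisco-Meraki-API-Key": MERAKI_KEY}, timeout=30,
    )
    r.raise_for_status()
    return r.json().get("zones", {}).get("0", {}).get("person", 0)

def notify(text):
    requests.post(
        "https://webexapis.com/v1/messages",
        headers={"Authorization": f"Bearer {WEBEX_TOKEN}"},
        json={"roomId": ROOM_ID, "text": text}, timeout=30,
    )

while True:
    count = people_in_store()
    if count >= CAPACITY:
        notify(f"Store is at or over capacity ({count}/{CAPACITY}) - please hold the door.")
    elif count >= 0.9 * CAPACITY:
        notify(f"Store is approaching capacity ({count}/{CAPACITY}).")
    time.sleep(60)
```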
Challenges we ran into
Two challenges are worth noting: first, getting the MV Sense API to work in our environment, and second, fixing bugs in our bot when sending out notifications.
Accomplishments that we're proud of
Proud of our global team getting together on such a tight timeline and getting this accomplished
What we learned
APIs are the future!
What's next for RAPID - Retail Android Pandemic Incident Disruptor
Continue to develop the idea!
Built With
css3
html5
node.js
python
Try it out
c3rb.glitch.me
c3rb.glitch.me |
10,004 | https://devpost.com/software/quickpickup | Operation View 1
Dashboard only view
Operation View 2
Sequence diagram
Inspiration
Hospitality faces challenges with the current COVID-19 situation; people are waiting a long time for order pickups.
What it does
A contactless order pickup process for restaurant patrons that eliminates the need for phone calls and IVR.
How we built it
We used and integrated the technologies listed below to build our app:
Platform:
Azure Cloud
Docker containers
API:
Meraki MVSense
Meraki Wireless
Cisco Webex Teams
Purple AI
Programming Languages:
Python
Reactjs
Type-Script
Databases:
mongodb
Challenges we ran into
Working across time zones, collaborating on development together, and partner programming
Learning the styles within the team
Accomplishments that we're proud of
Working with a geographically dispersed team to finish the project at a fast pace
Learning the technologies such as Cisco APIs, Purple AI
What we learned
DevNet SDKs
WebEx Teams SDK
MV Sense SDK
APIs with Purple
WebEx Teams for meeting spaces
What's next for QuickPickUp
Diversify into other industries, such as retail
Improve the security features
Automate deployment for quick spin-up
Built With
azurecloud
docker
merakiwifi
mongodb
mvsenseapi
purpleai
python
react
typescript
webexteamapi
Try it out
40.80.151.67
github.com |
10,004 | https://devpost.com/software/citizen-care-pod | The Insight Connected Platform app can be used on a variety of devices.
A prototype of one of the Citizen Care Pods being built.
A Citizen Care Pod inside the Connected Platform App with a list of hardware deployed in that pod and its status.
A camera reading from one of the Cisco Meraki cameras detecting crowds.
Alerts from that camera notifying a user that the line is getting long at the Citizen Care Pod.
WebEx Teams integration showing a real-time chat about the alert with another employee.
Inspiration
How do we help reopen offices, airports, stadiums, parks, and all public spaces, so people can get back to work (or, more importantly, fun) while also feeling safe? A robust disease prevention strategy is critical to helping create a more protective environment for people and ensuring business continuity.
Insight's Connected Platform can help rapidly evaluate, deploy, test, and manage new technology across our ecosystem of 3,500 partner products and solutions (including Cisco Meraki devices).
For this hackathon, we focused specifically on Citizen Care Pods: portable virus testing centers that can be deployed nearly anywhere and used to aid detection and screening wherever there is a large group of people (think construction sites, retail, or entertainment venues).
What it does
The Citizen Care Pod is outfitted with a variety of Cisco Meraki gear (cameras, SD-WAN, and access points for connectivity) to collect and send data to the Connected Platform so that customers can analyze and be more responsive to real-time data reports.
How we built it
The pods themselves are built with the partnership of a construction company using shipping containers (see prototype image).
The Connected Platform app is built in partnership with Insight's CDCT and Digital Innovation teams. It leverages the Cisco Meraki API, the MV Sense API, and the Webex Teams widget, as well as Angular for the front end. It pulls together the data sent from the devices in the pod into a real-time dashboard with insights, alerts, and sensor readings.
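As a hedged illustration of how a camera reading like the crowd-detection image above could be pulled into such a dashboard, the sketch below requests a still snapshot from an MV camera through the Dashboard API; the serial number is a placeholder and retry/error handling is omitted.

```python
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"  # placeholder serial for a pod camera
headers = {"X-Cisco-Meraki-API-Key": MERAKI_KEY}

# Ask the camera to generate a snapshot; the API returns a short-lived image URL.
resp = requests.post(
    f"https://api.meraki.com/api/v1/devices/{CAMERA_SERIAL}/camera/generateSnapshot",
    headers=headers, timeout=30,
)
resp.raise_for_status()
snapshot_url = resp.json()["url"]

# The URL can take a moment to become available, so a real dashboard would retry.
image = requests.get(snapshot_url, timeout=30)
with open("pod_snapshot.jpg", "wb") as f:
    f.write(image.content)
print("Saved snapshot from", CAMERA_SERIAL)
```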
Challenges we ran into
We ran into challenges trying to get the Guest Access integration for Webex Teams working, which would allow people who aren't part of an organization to interact via Teams. We had to fall back to a standard integration that only allows people already in the organization.
We also hit a limitation of the MV Sense API: it only provides still images, not a video feed.
Accomplishments that we're proud of
We're proud of getting the Meraki APIs and what we could do with Webex Teams integrated into our Connected Platform in a short amount of time.
What we learned
The Meraki platform is agile, can be integrated into solutions much more quickly, and offers better insights than other networking providers in this space. The design and API are accessible to networking novices, and even to non-network people such as software developers.
The MV Sense API exposes a lot of potential uses for social policy enforcement as well as for the health and safety of customers and employees.
What's next for Citizen Care Pod
This is a real go-to-market solution for Insight to take Meraki to our customers. The next phase is to build and deploy these for our customers in our communities in Cisco's fiscal year.
We will use Webex Teams Guest Access so that a person doesn't have to be a member of an organization to interact with someone via Teams. We are also considering leveraging DNA Spaces for guest access, providing customizable captive portals for guest onboarding and location analytics.
Looking at Enterprise Wireless APs and evaluating APIs for Cisco Access Points.
Integrate with DUO for multi-factor authentication.
Built With
angular.js
api
cisco-webex
meraki
mr
mv
mx
sd-wan
webex
Try it out
cisco-meraki-278014.ue.r.appspot.com |
10,004 | https://devpost.com/software/norris | Norris
Inspiration
Malware and ransomware are becoming all too common. Left unchecked, these threats can devastate corporate networks, as happened to Maersk when the NotPetya attack was unleashed against Ukraine.
While tools like Stealthwatch can help identify which systems have been compromised, a gap exists between identifying the affected clients and taking swift action to prevent a further spread.
What it does
NORRIS visualizes the location and threat assessment of wired and wireless clients. It enables single-button quarantine of malicious clients, using APIs to automate changes to VLANs and firewall rules that isolate these devices from the rest of the network until a technician can be dispatched to resolve the situation.
How I built it
We built a containerized application stack with a React front-end and Express back-end. In the front-end, we leveraged Google Maps and placed overlays for the building floor plan and positioned network clients, which we color-coded and provided "quarantine" and "release" actions for suspicious devices. In the back-end, we created collectors that fetch Meraki API data about network clients and firewall rules, listen to the Meraki Scanning API for device locations, and interrogate StealthWatch for the devices which are behaving suspiciously. We then correlated this data to tag the devices with an appropriate risk level in the user interface. When a user "quarantines" a device in the front-end, the back-end uses the Meraki API to apply firewall rules which isolate the device.
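A minimal sketch of the quarantine action is shown below; it uses the Meraki client-policy endpoint with a pre-created restrictive group policy as a stand-in for the exact VLAN and firewall-rule changes NORRIS applies, and the network ID, policy ID, and MAC address are placeholders.

```python
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
NETWORK_ID = "N_123456"            # placeholder network ID
QUARANTINE_POLICY_ID = "101"       # placeholder ID of a restrictive group policy
headers = {"X-Cisco-Meraki-API-Key": MERAKI_KEY}
BASE = "https://api.meraki.com/api/v1"

def quarantine(client_mac: str):
    """Assign the quarantine group policy to a suspicious client."""
    resp = requests.put(
        f"{BASE}/networks/{NETWORK_ID}/clients/{client_mac}/policy",
        headers=headers,
        json={"devicePolicy": "Group policy", "groupPolicyId": QUARANTINE_POLICY_ID},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def release(client_mac: str):
    """Return the client to the normal network policy."""
    resp = requests.put(
        f"{BASE}/networks/{NETWORK_ID}/clients/{client_mac}/policy",
        headers=headers, json={"devicePolicy": "Normal"}, timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

quarantine("aa:bb:cc:dd:ee:ff")
```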
Challenges I ran into
The single biggest challenge in trying to create the software was the lack of data in the lab environment. We were quite surprised to find that the lab networks had no clients of any kind, nor was a suitable StealthWatch instance available. We were able to use our in-house CMNA stack to provide the Meraki data. And thanks to WWT's ATC Lab, we were able to secure a StealthWatch instance, a significant perk of being part of the WWT ecosystem.
Accomplishments that I'm proud of
The team had been thinking about this concept for a while, and it was fun and satisfying to see a working solution in a matter of a couple of days.
What I learned
Stealthwatch works very easily with our Meraki network and provides data we otherwise wouldn't have regarding traffic on the network.
What's next for NORRIS
The Thelios team at WWT builds a product for provisioning and monitoring many Meraki networks. NORRIS is an additional monitoring feature that could be introduced in the near future.
Disclaimer
Chuck Norris has no affiliation with, and does not endorse, Thelios, WWT, or NORRIS. Chuck, if you're reading this, please don't roundhouse us, we're just big fans. 🙏
Built With
docker
google-maps
meraki
node.js
postgresql
react
recoil
rxjs
stealthwatch
timescaledb |
10,004 | https://devpost.com/software/automated-healthcare-network-commissioning-app | Automated Network Deployment & Support Healthcare App on web browser
Simply enter the hospital ID and barcode-scan in the Meraki serial numbers; then there is only one button to press.
The Meraki Dashboard now shows that the Field Hospital London has been created.
At the same time as the new Meraki network is provisioned, a Webex Teams space is automatically created for that specific Meraki network.
Various parameters from the spreadsheet are configured on the Meraki MX, MS and MR
Network Tags assigned based on data from spreadsheet.
Meraki MS Switch profiles auto bound to switchports to ensure consistency.
Meraki equipment hostname and addresses are auto configured based on the data from the spreadsheet.
Inspiration
Inspired by recent world events, and in particular the NHS Nightingale Hospital requirements within the UK, CAE has identified a requirement within the healthcare sector to be able to rapidly deploy IT network infrastructure. This infrastructure is required at various locations including pop-up hospitals, COVID-19 testing sites and community outreach support centres that are rapidly provisioned to support shielding of vulnerable people and increase care capacity.
Due to the critical and time sensitive nature of work within the healthcare industry any opportunity to minimise the complexity and time to commission new infrastructure is essential, therefore we believe leveraging the integration available within the Meraki platform can deliver this goal.
What it does
Using the Meraki Dashboard API, the CAE application is a single-step Meraki deployment tool which enables the healthcare industry to set up an entire suite of Meraki solutions with a single action.
As demonstrated in the video submission, all a user of the application has to do is enter the hospital ID number and barcode-scan in the serial numbers of the Meraki devices (MX/MR/MS) that will be going to site; all of the configuration is then completed automatically in the background by pulling the data variables from a user-friendly spreadsheet. This includes the automatic configuration of hostnames, network tags, IP schemas, VLAN assignment, switchport assignment and much more, resulting in hours of configuration time and money saved.
Furthermore, as part of the single step action the application integrates into the WebEx Teams API to automatically create a new Space specifically for that newly created Meraki network on the Dashboard and adds in the on-demand CAE NOC (Network Operations Centre) support group.
This on-demand NOC support group combines a Dialogflow bot, to automate addressing several common deployment issues, with a 24/7 support presence to quickly help support the newly created network and the physical deployment / go-live. This additional support presence is not the standard first line support, instead it consists of a real-time SLA with a dedicated team of specialised Meraki engineering resources at CAE helping to get networks online rapidly and with more agility.
The advantages of this CAE created application is as follows:
• Risk mitigation – through the rapid and consistent deployment of infrastructure leveraging automation and the underlying APIs. In the context of the pandemic, this is absolutely critical as resources can be up and running rapidly with full confidence in their capabilities allowing patients to be treated faster;
• Significantly reduced network deployment times – achieved through automation and integration facilitating the decrease of the touch points from factory to field engineer. As a result, care facilities can be stood up in a shorter timeframe which benefits the primary aim of the facilities;
• Reduced (CapEx) deployment costs – through the removal of the need to prestage hardware and send highly qualified networking-orientated resource to deploy. This allows a right-sized professional services effort to be leveraged, including the use of non-IT skilled resources, widening the pool of available resources which can be used. Ultimately, this also ensures that the available budget for healthcare can be used optimally and with more focus on the care of patients;
• User friendly and simplified interface – which heavily reduces human input errors allowing equipment to be provisioned quicker. In addition to these characteristics, integral support for a handheld barcode scanner allows deployment engineers to scan and register Meraki devices directly into the application with minimal effort;
• Configuration standardisation and efficient updates – which is critically important throughout a large-scale install base and allows confidence in the capabilities of the underlying environment. This is achieved through integration and auto-binding of configuration templates, which in turn will assist in future “change at scale” configuration updates;
• Targeted support – allowing on-site deployment resources to access the on-demand NOC group automatically created as part of the deployment. This facilitates faster and more personalised responses coupled with a reduced time to resolution for issues faced whilst deploying the Meraki solution.
How I built it
We utilised the Meraki Dashboard API to automate the process of creating a new network and deploying Meraki infrastructure across it. The application has been built using the ASP.NET framework. Using RestSharp we built an HTTP client library with which we could perform REST-based API calls and also deserialise REST responses. This logic allows for the automated deployment as long as the hospital/site number and Meraki device serials are inputted. The application itself is hosted on a Microsoft IIS webserver.
The same framework has been used to integrate with the Webex Teams API, allowing automatic room creation and members to be dynamically assigned. These Webex Teams spaces allow field engineering teams or non-technical staff to have an immediate, dedicated network support team available to them. Members of these Webex Teams spaces are members of the NOC as well as chatbots created using the Dialogflow API. An extensive knowledge base of Meraki issues and troubleshooting steps was put together based on CAE's networking and support experience. Dialogflow offers integration with this knowledge base, allowing a whole suite of issues to be troubleshot via chat automatically. The human members of this group offer additional support, primarily to assist with any further issues, and can also feed repeat issues and questions back into the existing knowledge base for future automation.
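The application itself is ASP.NET/RestSharp, but a hedged Python sketch of the underlying REST calls (create the network, claim the scanned serials, then create the dedicated Webex Teams space and add the NOC) might look like this; the organisation ID, serial numbers, and NOC email are placeholders.

```python
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
WEBEX_TOKEN = "YOUR_WEBEX_BOT_TOKEN"
ORG_ID = "123456"                                    # placeholder organisation ID
meraki = {"X-Cisco-Meraki-API-Key": MERAKI_KEY}
webex = {"Authorization": f"Bearer {WEBEX_TOKEN}"}

hospital_id = "Field Hospital London"
serials = ["Q2AA-AAAA-AAAA", "Q2BB-BBBB-BBBB"]       # scanned MX/MS/MR barcodes (placeholders)

# 1. Create the network for this hospital.
net = requests.post(
    f"https://api.meraki.com/api/v1/organizations/{ORG_ID}/networks",
    headers=meraki,
    json={"name": hospital_id, "productTypes": ["appliance", "switch", "wireless"]},
    timeout=30,
).json()

# 2. Claim the scanned devices into the new network.
requests.post(
    f"https://api.meraki.com/api/v1/networks/{net['id']}/devices/claim",
    headers=meraki, json={"serials": serials}, timeout=30,
)

# 3. Create the dedicated Webex Teams space and add the NOC support group.
room = requests.post("https://webexapis.com/v1/rooms", headers=webex,
                     json={"title": f"{hospital_id} - Deployment Support"}, timeout=30).json()
requests.post("https://webexapis.com/v1/memberships", headers=webex,
              json={"roomId": room["id"], "personEmail": "noc@example.com"}, timeout=30)
```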
Challenges I ran into
Our team’s skillset is primarily in network support. As such, the main challenge we faced was being presented with an entirely new challenge and area of knowledge we had to upskill in. We utilised existing documentation, guides, and resources in order to produce our application.
When building the Dialogflow chatbot integration a substantial amount of time was required to populate the knowledge base for these bots to leverage. Although time consuming, we were aided by our background within network support.
Accomplishments that I'm proud of
Utilising our existing skillset and knowledge to our advantage (building on the chatbot knowledge base).
Upskilling and knowledge gained by everyone on the team who participated in putting together our application. It exposed members of the team to ASP.NET, RestSharp, Dialogflow and more.
Confirmed the real-life benefits of our application with those working within the healthcare sector, primarily saving time, money, and man-power. Please see additional evidence “Feedback from Harling & Michael Sobell House Hospice” from an engagement with the healthcare sector during the Hackathon.
What I learned
Gained exposure and additional knowledge in a variety of areas (ASP.NET, RestSharp, JSON, Dialogflow and more) as well as a better understanding of the various APIs used.
Explored a variety of different problem-solving models and frameworks to initially decide on the application: SURF and the 7-point problem-solving model.
The economic and customer benefits associated with the designed application. Following a project from initial idea to a developed concept with feedback from professionals within the target market. How to expand on the existing application and create other projects.
What's next for Automated Network Deployment & Support Healthcare App
We would really like to look at formalising a "Healthcare-as-Code" practice and methodology by using integration with the Meraki Dashboard API and the other APIs that were part of this Hackathon.
We also explored locating key healthcare workers and infectious patients within a facility using BLE beacons. This can be used to track contact, and beacons can also be placed on equipment such as hand sanitation stations and ventilators. By integrating with the Cisco Vision Dynamic Signage solution, we could display an alert on the ward digital sign if certain staff haven't come into close proximity with a BLE-beacon-tagged sanitation station for a certain number of hours, so they are prompted visually.
Additional management of the Meraki dashboard from the NOC support group within the Webex Teams space using the API, for example being able to reload and diagnose Meraki equipment from the Teams space.
Built With
.net
asp.net
dialogflow-api
http
iis
meraki-dashboard-api
restsharp
webex-teams-api |
10,004 | https://devpost.com/software/3data-analytics | 2D view from desktop of 3D network graph inside the virtual command center.
In VR view of Apollo and network heatmaps with MV camera locations.
3D rendering of Capital Factory building.
Inspiration
Due to Covid-19, our team now works entirely from home. Working remotely makes it harder to do tasks typically performed in a physical setting. For security analysts accustomed to working in a network operations center (NOC), remotely monitoring the health of an IT Network via Meraki can be a challenge. We saw this as an opportunity to combine 3Data Analytics VR technology and Meraki APIs to create a virtual NOC environment.
What it does
Our hackathon project, named Apollo Insights, is a voice-controlled interface that allows you to visualize and control Meraki endpoints in real time, keeping operators in sync and simplifying Meraki network monitoring and management. Apollo Insights is an event-driven alert service for Meraki endpoints, pushing alerts through Webex Teams and surfacing them in the 3Data virtual command center environment, allowing remote operators to visualize their full Meraki network in virtual reality.
How we built it
We integrated the Meraki Dashboard API by setting up a Meraki webhook to notify the Apollo Insights engine whenever there is an anomaly on the Meraki network. We developed an alert system that allows our Apollo Insights engine to quickly dispatch a notification to all of your Webex enabled devices when an anomaly is detected. This notification provides a link allowing users to seamlessly enter the 3Data Analytics Platform and visualize the related network data in Virtual Reality, giving Meraki operators the full context of the alert.
On top of the 3Data platform, we built a component to get the RTSP feed from the Meraki Dashboard API. The RTSP stream allows us to stream live video from compatible cameras. In virtual reality, we texture a 3D object, process the video format, and undistort the camera feed, enabling security analysts to view the camera feed as if they were actually standing in the room. Our video component makes polling 360° cameras intuitive. This is something you simply cannot do in 2D.
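A pared-down sketch of the webhook-to-notification path is shown below, written in Python with Flask for brevity even though the actual Apollo Insights engine is Node.js; the Webex room ID and the 3Data deep link are placeholders.

```python
from flask import Flask, request
import requests

app = Flask(__name__)
WEBEX_TOKEN = "YOUR_WEBEX_BOT_TOKEN"
ROOM_ID = "WEBEX_ROOM_ID"                       # placeholder space for the operators
VR_LINK = "https://example.3data.io/scene/123"  # placeholder deep link into the 3D scene

@app.route("/meraki/webhook", methods=["POST"])
def meraki_webhook():
    alert = request.get_json(force=True)
    # Meraki webhook payloads include fields such as alertType, networkName, deviceName.
    text = (f"Meraki alert: {alert.get('alertType')} on {alert.get('deviceName')} "
            f"({alert.get('networkName')}). View it in VR: {VR_LINK}")
    requests.post(
        "https://webexapis.com/v1/messages",
        headers={"Authorization": f"Bearer {WEBEX_TOKEN}"},
        json={"roomId": ROOM_ID, "text": text}, timeout=30,
    )
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)
```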
Technology
A-Frame: A-Frame is a framework for creating VR environments, and we used it especially for the VR Camera view.
Meraki Dashboard API
Meraki Webhooks API
Cisco Webex teams API
Meraki camera feeds
Challenges we ran into
The Meraki Fisheye Camera gives us streams that are distorted. There wasn’t much information online about how to undistort it for use in VR, but eventually, we found a UV geometry mapping that worked well.
Maintaining two application states for managing notifications is difficult in the sense that each state always has to be in sync with the other. This presents challenges when working with a remote server where you have far less control over the environment and how you access it.
A well-known challenge when building software that will run in a virtual environment is how to get users in and out of that application effectively. One of the ways where we attempt to solve this problem is with a link provided by the Apollo Insights service which takes a user straight to a 3D visual representation of their data.
Accomplishments that we're proud of
One of the design principles behind the Meraki Webhooks API is an event driven architecture. Building on top of this pattern, we were able to design a performant application that takes full advantage of Node.js native strengths in asynchronous programming.
It was really satisfying to view camera feeds in VR
What we learned
Most of this project was getting a handle on the various APIs involved, which had its own learning curve.
Learned how to map fisheye camera feed to a sphere in VR
What's next for 3Data Analytics
We were limited by the time frame of a hackathon. One thing we’d like to support in the future is more notifications. Currently, our notification is attached to the Access Point Down webhook, but there’s no reason we couldn’t add other hooks in the future. For events like when Network Usage exceeds a certain amount, we could change the context that Apollo brings up to display network usage over time. Webhooks make this task relatively simple.
Built With
aframe
cisco
cisco-webex
javascript
meraki
node.js
webex |
10,004 | https://devpost.com/software/contact-less-front-desk-enabling-social-distancing | Kiosk Emulator - First page
Person detected by Meraki camera
Check in confirmed
New webex room created
Inspiration
Our global community is experiencing unprecedented times due to the COVID-19 pandemic, and social distancing will remain with us moving forward, so technologies need to evolve to allow for more contactless user experiences. Our idea is to implement a contactless check-in experience for hotel guests to protect the health and safety of both guests and staff.
What it does
A Meraki camera detects a person as soon as a guest enters the hotel lobby. A real-time snapshot is sent to the backend system for identification. If the person is identified as a new guest, a prompt appears on the kiosk asking the user to scan a QR code. Once the QR code is verified, check-in information is provided to the guest. Simultaneously, a new Webex Teams space is created where guests can reach out to various hotel staff. This provides a seamless and contactless user experience at hotels.
How we built it
Using the Meraki MV Sense API we built the person detector. We also used the Meraki camera API to take the live snapshot, which is sent to a backend database for identification. We used the Webex Teams API to create a new space for every new check-in. Our core application is built in Python 3. We built the UI to emulate the kiosk using React, TypeScript, and Material.
Challenges we ran into
One challenge was using the MV Sense API: we had to create a polling app that continuously monitors the camera to detect a person.
Accomplishments that we're proud of
We collaborated with different members for the first time, yet worked in harmony. We are very proud of that and eventually had a lot of fun.
What we learned
Most of the team members got firsthand experience working with the Meraki and Webex Teams APIs, which was great. Some of us were new to programming, and this was a great opportunity to learn some programming skills.
What's next for Contact-less Front Desk - Enabling Social Distancing
We will add more features and optimize the check-in process
Built With
apache-tomcat
camera-api
flask
flaskrestplus
material
meraki-sdk
mvsense-api
python
react
typescript
webexteams-api
webexteamssdk |
10,004 | https://devpost.com/software/meraki-app-for-splunk-phantom | Resources
Inspiration
An investment management financial services company has increased their remote workers from 400 to 2,700 agents supported primarily by the Cisco Meraki Z3 Cloud Managed Teleworker Gateway.
The firm requires stringent access controls of the devices (only corporate IP Phones and laptops) connected to the gateway. The security analyst(s) must quarantine the teleworker if unauthorized devices are discovered on the teleworker gateway.
What it does
The Meraki app for Splunk Phantom was enhanced to include a 'bind network' function, allowing the security operations team to specify the target network and the name of the quarantine template to apply to the teleworker.
How I built it
The Meraki app for Splunk Phantom uses the Meraki dashboard API to locate end-user devices within one or more organizations, networks / devices, and to bind a configuration template to a specified network.
By using the REST API of Splunk Phantom, security incidents (containers and artifacts) can be created and playbooks are programmatically initiated invoking the Meraki app functionality.
It is assumed the organization can identify the presence of unauthorized devices by way of log analysis or a host PC agent distributed scan. From these tools, the source MAC address and other supporting information are populated into a Common Event Format (CEF) record. The CEF data is part of the Phantom container and artifact generated by a program using the Phantom Ingest SDK.
Splunk Phantom will invoke a playbook which executes the Meraki app after the container is created on Phantom. The first step is to locate the name of the network where the source MAC address is found. The second step is to bind a quarantine network template to the targeted network name.
The results of these operations are returned to Phantom and logged.
This workflow can execute without human intervention to the point of end-user notification and remediation.
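A hedged sketch of those two Meraki steps, stripped of the Phantom app wrapper, is shown below; the organization ID, quarantine template ID, and MAC address are placeholders.

```python
import requests

MERAKI_KEY = "YOUR_MERAKI_API_KEY"
ORG_ID = "123456"                 # placeholder organization ID
QUARANTINE_TEMPLATE_ID = "L_987"  # placeholder config template ID
headers = {"X-Cisco-Meraki-API-Key": MERAKI_KEY}
BASE = "https://api.meraki.com/api/v1"

def find_network_with_mac(mac: str):
    """Return the first network in the org whose recent clients include this MAC."""
    networks = requests.get(f"{BASE}/organizations/{ORG_ID}/networks",
                            headers=headers, timeout=30).json()
    for net in networks:
        clients = requests.get(f"{BASE}/networks/{net['id']}/clients",
                               headers=headers, params={"timespan": 86400}, timeout=30).json()
        if any(c.get("mac", "").lower() == mac.lower() for c in clients):
            return net
    return None

def bind_quarantine_template(network_id: str):
    """Bind the quarantine configuration template to the targeted network."""
    resp = requests.post(f"{BASE}/networks/{network_id}/bind", headers=headers,
                         json={"configTemplateId": QUARANTINE_TEMPLATE_ID}, timeout=30)
    resp.raise_for_status()

net = find_network_with_mac("aa:bb:cc:dd:ee:ff")
if net:
    bind_quarantine_template(net["id"])
    print(f"Bound quarantine template to {net['name']}")
```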
Challenges I ran into
Networks of type camera cannot be bound to templates.
Accomplishments that I'm proud of
The quarantine template can be applied without human intervention.
What I learned
The Splunk Phantom instance is deployed on AWS; this app demonstrates integration of cloud-managed services.
What's next for Meraki app for Splunk Phantom
Deployment by the WWT Meraki managed services team.
Built With
meraki
phantom
python
splunk
Try it out
github.com
github.com |
10,004 | https://devpost.com/software/meraki-client-search | Sharing of CSV file on webex.
Logging of database synchronization
Inspiration
Rolling out ISE and access policies across large organizations has required me to spend a lot of time creating various scripts to collect client data to build the policies. During and after the rollout it has also been useful as a sanity check.
I've also had customers ask "where are all our video servers, credit card terminals (and so on) located?" This tool gives easy access to all that information, and doesn't require users to look at each network in the organization or even have access to the dashboard.
Waiting for a script to collect all client data can take hours for large organizations, so putting the data in a database and querying that is a time saver.
What it does
Provides an interface to search for clients across the entire organization. It also logs all events for the database synchronization to a Webex room as well as allows users to share the search result in the Webex room as a CSV file.
How I built it
It's a Django application running on Heroku, with a Bootstrap frontend.
Challenges I ran into
The largest challenge was figuring out how to run the database synchronization on Heroku using a worker, since it can take so long. Another issue was handling the logging to Webex using threading and a logging table.
Accomplishments that I'm proud of
It looks great!
DEMO
URL: merakisearch.herokuapp.com
user: demo
password: ilovemeraki
Built With
bootstrap
django
heroku
jquery
python
Try it out
merakisearch.herokuapp.com |
10,007 | https://devpost.com/software/spilt-coffee-gwzinv | spilt coffee screens (search history, results, competitive analysis)
spilt coffee logo
GIF
spilt coffee animation
☕️ spilt coffee
Media monitoring and brand reputation trends powered by AI.
We help small businesses use wit.ai to take control of their brand by giving them instant access to and sentiment analysis on mentions across social, reviews, news and more.
✨ Mission
We believe everyone has room to grow and thrive. We commoditize big data for small businesses; reviews are better when they're heard. No one should be left out because the cost is too great or the technology too complex. So we build easy tools to empower businesses to take control of their brand. Tools that make media monitoring, competitive analysis and reputation tracking effortless—spill the coffee.
📈 Features
SEARCH
We scrape data from Yelp, Twitter and News about your brand (and any brand), bucket reviews by sentiment and display results on user-friendly charts.
MONITOR
We automate searches to easily monitor brand sentiments over time, and display historical trends on this data.
SENTIMENT ANALYSIS
We utilize advanced sentiment detection tools like VADER and wit.ai to segment positive, negative and neutral mentions, and assign overall sentiment to each mention.
COMPETITIVE ANALYSIS
We allow businesses to compare and overlay competitor data with their own, keeping up to date with what the people are saying.
🧱 Architecture
A brief overview of our application, with some key features (green) on the left and how we handle them on the backend on the right.
🔮 NLP Model
MODEL
Sentiment analysis was conducted with an ensemble model aggregating both the VADER model and Facebook's wit.ai NLP model. Training and test data were primarily formed from Yelp's open data set (>8,000,000 user reviews) and the Sentiment140 Twitter dataset.
IMPLEMENTATION
We utilized wit.ai's sentiment analysis to rate and categorize reviews. We trained entity recognition to pull out and rank the most common entities in "very negative" and "very positive" reviews. We can then see what users tend to have issues with, and provide actionable recommendations to improve these businesses.
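As a small illustration of the VADER half of this ensemble (the Wit.ai sentiment trait is combined with it server-side and is omitted here), the sketch below buckets mentions by compound score; the thresholds are the conventional VADER defaults, not necessarily the exact values used in spilt coffee.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def bucket(text: str) -> str:
    """Classify a mention as positive/neutral/negative using VADER's compound score."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

reviews = [
    "The baristas here are wonderful and the cold brew is amazing!",
    "Waited 40 minutes and my order was still wrong.",
    "It's a coffee shop.",
]
for review in reviews:
    print(bucket(review), "-", review)
```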
ACCURACY
Running our ensemble model on a subset (test) dataset, we achieved an accuracy of 73.35% on a set of 2,000 Yelp test points, as well as 74.25% for the Sentiment140 set on its given test set of 497 tweets.
NEXT STEPS
We hope to add more robust entity tagging as well as bi- and tri-gram recognition in order to better provide value for businesses. We also hope to extend the pre-trained wit.ai sentiment model to a five category model (very negative, negative, neutral, positive, very positive) for better bucketing.
💻 Tech Stack
UI frameworks:
ElasticUI
Recharts
Frontend:
React.js
Backend:
Django
DB:
PostgreSQL
Authentication:
Auth0
Model:
VADER
wit.ai
✔️ To Do
☐ Provide helpful feedback and insights for businesses (actionable recommendations!).
☐ Perform more in depth competitor sentiment analysis, and ability to recognize competitors.
☐ Allow users to mark wrong sentiments (and correct them). Our models aren't perfect, we have room to grow too!
☐ We already provide a set of content marked as "extremely negative" or "extremely positive". Now, it's time to extrapolate reasons and analyze severity.
☐ Scrape more platforms (Facebook, Instagram, more news sources, etc.)
👻 Fun Facts
In our first (virtual) meeting where we were struggling to decide on our product name, one member spilled coffee on himself—with that, "spilt coffee" was born.
"8 million rows [of yelp review data] is a lot of rows."
Built With
django
firebase
heroku
postgresql
react
square
twitter
wit.ai
yelp
Try it out
spilt-coffee.web.app |
10,007 | https://devpost.com/software/robin-accountant | Robin's Website
A typical conversation with Robin
Robin's State Machine
Inspiration
Budgeting and personal finance is a challenge for many people, and a large percentage of the population lives paycheck-to-paycheck, having little to no savings. Extreme circumstances like the current Covid-19 pandemic affect the most vulnerable people the most.
We have decided to leverage the power of Wit.ai to build an easy-to-use chat bot that assists people with budgeting and tracking expenses in an effort to empower people to stay on top of their finances and to put some fun into personal finance. Wit.ai supports a direct and natural way to interact with technology through language, making book-keeping easy and accessible.
What it does
Robin allows users to quickly set up a budget and keep track of expenses. Users can add expenses whenever they occur just by pulling out their phones and leaving a text or quick voice message. Robin is then able to do calculations on these expenses and tell the users how much of their weekly budget is left, when expenses have been incurred, what expenses have been incurred over what period of time, and so on.
How we built it
Robin lives inside a TypeScript Cloud Function and is hosted on Firebase. Incoming messages from Messenger and Telegram get forwarded to Robin for processing. Messages are analyzed based on the current state of the conversation (backed by a database and a state machine) and are then forwarded to Wit.ai, which returns a list of intents, entities, and traits (we have also implemented support for voice messages and audio conversion). We then process those intents, entities, and traits and match them against the current state of the conversation. Robin then produces a list of reply messages to be sent back to the user and a list of actions to be carried out, along with the new state of the conversation, which is persisted in the database and loaded again once the next message for that user arrives.
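Robin itself is written in TypeScript, but the state-machine dispatch described above can be sketched roughly as follows; the state names, intents, entity keys, and handlers are simplified placeholders rather than Robin's real implementation.

```python
# Simplified, hypothetical illustration of the conversation state machine.
STATE_IDLE = "idle"
STATE_AWAITING_AMOUNT = "awaiting_amount"

def handle_message(state, wit_result):
    """Given the persisted conversation state and Wit.ai's parse, return (new_state, replies)."""
    intent = wit_result["intents"][0]["name"] if wit_result.get("intents") else None
    amounts = wit_result.get("entities", {}).get("wit$amount_of_money:amount_of_money", [])

    if state["name"] == STATE_IDLE and intent == "add_expense":
        if amounts:  # amount supplied in the same message, e.g. "I spent $12 on lunch"
            state["expenses"].append(amounts[0]["value"])
            return state, [f"Got it, recorded {amounts[0]['value']}."]
        return {**state, "name": STATE_AWAITING_AMOUNT}, ["Sure - how much did you spend?"]

    if state["name"] == STATE_AWAITING_AMOUNT and amounts:
        state["expenses"].append(amounts[0]["value"])
        return {**state, "name": STATE_IDLE}, ["Recorded! Anything else?"]

    return state, ["Sorry, I didn't catch that."]

# Example: a fresh conversation receiving an add_expense intent without an amount.
state = {"name": STATE_IDLE, "expenses": []}
state, replies = handle_message(state, {"intents": [{"name": "add_expense"}], "entities": {}})
print(state["name"], replies)
```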
Challenges we ran into
As mentioned above, the implementation of a chat bot's logic can become very complex very quickly. The major challenge we faced was dealing with that complexity in a way that allows more functionality to be added without increasing complexity exponentially. Another challenge we faced goes hand-in-hand: debugging complex, interwoven state can be difficult and time-consuming. Extensive logging and tracing really helped a lot here and is something to remember for the next project. Finally, we had to come up with a custom solution to support voice messages because Wit.ai does not natively support the voice message formats of Telegram/Messenger.
Accomplishments that we're proud of
We were able to set up an MVP of a chat bot that implements the functionality discussed above. The system works well, supports both Messenger and Telegram, and can be extended to other chat clients trivially. We were also able to get voice support running through a custom audio conversion solution.
What we learned
Sometimes things that seem simple on the surface turn out to be much more difficult when examined in detail. The first few Wit.ai intents were quickly implemented (e.g. tell_joke and greeting), but the more complicated ones such as add_expense quickly led to a state explosion that required us to change our approach from a direct implementation of the logic to an indirect solution through a state machine. We also learned that it is a good idea to keep things more generic from the start in order to be able to support multiple back-ends (Messenger/Telegram) without excessive refactoring sessions.
What's next for Robin Accountant
Currently, Robin is in MVP status and the functionality needs to be refined a little more. A feature we'd particularly like to implement is support for sending pictures of receipts that will then be tracked by Robin. This would allow users to keep track of important expenses and come in handy for tax time.
Built With
firebase
messenger
telegram
typescript
wit.ai
Try it out
robin.silentbyte.com
github.com
t.me |
10,007 | https://devpost.com/software/otto-v05m26 | Final stage: Code output and open in Google Collab to try it out
K-Nearest Neighbor training & visualization
Linear Regression training & visualization
Neural Network builder with Otto integration
Preview sample datasets in-browser. These are standard sklearn datasets
Otto - Task Inference and Recommendation
Otto: Your friendly machine learning assistant.
Build machine learning pipelines through natural language conversation
Otto is an intelligent chat application, designed to help aspiring machine learning engineers go from idea to implementation with zero domain knowledge. Our website features easy model selection, insightful visualizations, and an intuitive natural language experience guiding you every step of the way. A collection of four Wit backend apps services Otto's conversational abilities and machine learning tools.
We encourage you to explore our GitHub readme for an animated look at what Otto offers!
Highlights
Beginner-friendly design.
Otto is made for novices, as it assumes no prior knowledge of machine learning. Users simply describe their end goals to obtain intelligent recommendations, or can choose from sample datasets to harness our models in an instant.
Powerful machine learning tools.
A range of machine learning capabilities are supported, including models for regression, classification and natural language processing, as well as preprocessors tailored to your problem. Play with neural networks, explore data visualizations, and generate ready-made Python code right in your browser!
Educational experience.
Users are walked through each stage of the process, with Otto explaining terminology when needed. Annotated code blocks provide eager learners a high-level understanding of their end-to-end pipeline.
Quick Start
To demo some of Otto's main features, try out the following:
Say: "I want to label flower species by petal length" to watch Otto prefill your pipeline and render a nearest neighbors classification on the popular Iris dataset.
Select: Regression > Sample Dataset to preview sample datasets for regression, and discover the strongest predictors using different best fit lines.
Say: "Detect fraudulent credit card activity" and select the Custom Dataset option to experience Otto's model recommendation system and interactive neural network designer.
Say: "I'd like to interpret the mood of a review" to query Wit-powered natural language models for live results.
and feel free to get creative! Come up with your own ML goals and see where Otto takes you.
Stages
Below is a step-by-step breakdown intended for the technical reader.
Task
One of the biggest obstacles faced by those just getting started with ML is the abundance of jargon, from “loss functions” to “contour boundaries“ — beginners can't be expected to decide what model to use based on cryptic terminology, let alone develop one from scratch! Otto narrows down your options by inferring the high-level task at hand from a simple objective statement.
Task inference is powered by a Wit application (Otto-Task) trained on 300 such statements (e.g. “I want to detect loan applications as fraudulent”, “help me forecast stock prices”, or “let's summarize an article into a paragraph”) derived from real-world machine learning research. Otto-Task attempts to categorize the task intent as regression, classification, or natural language processing, and additionally extracts a subject entity embodying a streamlined form of the objective in order to filter out extraneous words.
The subject is parsed for keyword matches (“tweets”, “housing”, etc) against our database of sample datasets. If a relevant dataset is found, Otto pulls the optimal task, model, and preprocessors for the dataset and pre-selects them for the user throughout the pipeline-building process. Otherwise, Otto issues a task recommendation based on the recognized intent. And if no intent was identified, the user is provided with some tips to help them pick the best task themselves.
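Under the hood, a query like this boils down to a single call to Wit's /message endpoint; the hedged sketch below shows the general shape, where the server token and the "subject" entity name stand in for Otto-Task's real configuration.

```python
import requests

WIT_TOKEN = "OTTO_TASK_SERVER_TOKEN"  # placeholder server access token for the Otto-Task app

def infer_task(objective: str):
    resp = requests.get(
        "https://api.wit.ai/message",
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
        params={"q": objective},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    intent = data["intents"][0]["name"] if data.get("intents") else None  # e.g. "classification"
    # Entity keys are "<name>:<role>"; "subject" is a stand-in for Otto-Task's entity name.
    subjects = data.get("entities", {}).get("subject:subject", [])
    subject = subjects[0]["value"] if subjects else None
    return intent, subject

print(infer_task("I want to detect loan applications as fraudulent"))
```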
Dataset
Users are recommended a specific sample dataset matching their subject, or otherwise offered to preview and choose one themselves. Sample data allows beginners to prototype models quickly and easily, without the complexity of finding a dataset and figuring out the relevant features among dozens. Users may also opt to proceed with their own data, which they can include later on in the generated code.
Model
If the user opted for custom data, Otto leverages Wit to perform the key step of selecting a classifier or regressor. A Wit client (Otto-Model) parses a brief user description of their data for key phrases indicating the desirability of a particular model. Otto-Model includes around 15 phrases and synonyms per model and performs fuzzy string matching, making it an effective and scalable technique for model recommendation.
A characterization of the classification dataset as “simple” or having “just a few columns”, would make the K-Nearest Neighbors algorithm a good choice, while a description of the regression data as “crime rates” or “annual consumer rankings” would suggest a Poisson or ordinal model, respectively. If no phrase is flagged, Otto will default to the most general model available: a Neural Network for classification, or a linear fit for regression.
In the case of a natural language task, users can combine multiple models together for a more comprehensive analysis. Otto will recommend both sentiment analysis and entity recognition models, but provides users with information about both in case they'd like to adjust this. Our NLP models are built on a Wit backend (Otto-NLP) configured to identify built-in traits and entities.
Supported models:

Model Name | Task | Description
---|---|---
K-Nearest Neighbors | Classification | Draws class regions by looking at surrounding data
Neural Network | Classification | Deep learning model suitable for complex datasets
Linear | Regression | Ordinary linear relationship between variables
Poisson | Regression | Models count data, which tends to follow a Poisson distribution
Ordinal | Regression | Learns rankings (e.g. "on a scale of 1-5")
Sentiment Analysis | Natural Language | Detects polarity, expressions of thanks, and greetings/goodbyes
Entity Recognition | Natural Language | Extracts structures such as people, times & locations, and works of art
Preprocessors
What good is a fancy model if it takes ages to train? In this step, Otto swoops in with handpicked preprocessors for the user's data and model selections, abstracting away the intricacies of feature engineering and dimensionality reduction — machine learning techniques that optimize the data for efficient learning. As always, users can override the recommendations.
Supported preprocessors:

Preprocessor Name | Description
---|---
Principal Component Analysis | Performs dimensionality reduction and/or feature selection
Normalization | Scales data to have mean centered at 0 and unit variance
Text Cleaning | Removes emojis, noisy symbols, and leading/trailing whitespace
Visualization
The visualization stage activates for neural network design, or to render any models built on sample data.
Neural Network
Satisfy your curious mind with our fun, interactive network builder!
Otto preconfigures a standard model architecture with research-based activations and initializers, but users are free to tinker with it layer by layer as they wish. Additionally, Otto can make network redesigns en masse with the aid of a dedicated Wit model (Otto-Net) that translates user instructions into architecture changes.
Model Visualization (Sample)
Instantly explore how parameters affect KNN clusters and regression slopes!
Code Display
All done! With your data sorted out, preprocessors set, and model configured, Otto gives you a nice view of your work.
Future
Otto's modular design makes it readily extensible, and its use of Wit means its natural language capabilities can be extended to even more domains. Here are just a few things planned for Otto:
More models: logistic regression, support vector machines, decision trees
New tasks: data generation (e.g. GANs), speech recognition
Smarter NLP: being able to ask Otto to explain machine learning concepts or describe the difference between options
About
Kartik Chugh
Kartik is an incoming second-year at the University of Virginia, currently an AI intern at Amazon Alexa. An avid open-source contributor, he is passionate about API design and developing only the coolest machine learning tools :)
Sanuj Bhatia
Sanuj hopes he has a good chance at the hackathon, as it might have something to do with him being a Software Engineer at Facebook. He loves building interactive React-based applications, and likes to introduce and then fix bugs for maximum impact :D
Built With
facebook-duckling
facebook-nlp
material-ui
node.js
react
wit
wit.ai
Try it out
ottoml.online
github.com |
10,007 | https://devpost.com/software/fibonaccis-s | Start a chat
Chat
Login
Initial Page
Recipes
Example of suggestion
Recipe
Profile
Inspiration
Eating is one of the fundamental needs of the human being and one of the greatest pleasures in life. Yet despite the fact that we all love to eat, very few of us know how to cook, and not knowing how not only forces us to depend on someone else for our meals; it can also cause real problems. According to the British chef Jamie Oliver, the obesity problem rooted in the United Kingdom is largely due to a lack of knowledge about preparing food, which makes people depend on junk food or simply skip eating, and this happens not only in the United Kingdom but worldwide.
Our inspiration comes from the fact that many people feel limited in the kitchen and are afraid to get close to it and prepare their favorite dish. Nowadays, with quarantine in place around the world, many people want to take the initiative to learn, and Fibonacci can help with that task by turning recipe preparation into a fun process where you learn as you play.
What it does
FIBONACCI is an application that contains recipes you can cook step by step on your own, or in the company of LEO, an assistant created with WIT.AI that keeps a fluid conversation going while you create your favorite dish. In addition to being a cookbook, FIBONACCI also has a touch of gamification: you can challenge yourself and your friends to learn new dishes and grow your skills in the kitchen until you become the best cook in the app and LEO's sous chef at his prestigious restaurant, Fibonacci.
How I built it
-Wit.ai -Vue -Node.js -Firebase
We built a serverless REST API hosted on Firebase. Docs:
https://documenter.getpostman.com/view/11046751/Szzn5wDM
The PWA, built in Vue.js, connects to it. We use Firebase for auth and the database.
Challenges I ran into
Learning how to integrate wit.ai with our own web app was one of the biggest challenges we faced; fortunately, we got past it and achieved a very nice final result.
Accomplishments that I'm proud of
We are proud of Leo, our chef bot made with wit.ai, as well as the design of the app and the powerful API at the core of the chat!
What we learned
We learned how to create our own REST API with the recipe information so we can use it in our web app, how to manage a backend that loads information from and stores information to the database, and how to build the front end with Vue. We also learned how to communicate with the Wit.ai API, how to create different intents, and how to use entities with useful data so we can change the flow of the conversation.
What's next for Fibonaccis's
We can make cooking more than a laborious task: it could become a game for the many people who fear starting down this path, and the idea is fully scalable since it supplies one of the basic needs of the human being. In the future we want to add speech interactions, challenges you can send to friends to cook together, prices for the ingredients you need, social media sharing of the recipes you prepare along with your friends' reactions, and an entire ecosystem around this app.
Built With
apidojo
express.js
firebase
node.js
vue.js
vuetify
wit.ai
Try it out
fibonacci-app.web.app
github.com
github.com |
10,007 | https://devpost.com/software/memorai | Inspiration
Alzheimer's is a very common disease that has been around for years, yet there is no cure for it. There is no greater pain than forgetting the ones you love, and it is even worse for the people who are closest to them. We know it is a very difficult process to go through, so we came up with the idea of using modern tech to slightly improve the lives of these patients.
How we built it
We brainstormed for a bit and thought about the various implementations a chatbot could have that could not only
benefit us but also help society and, in general, make lives better.
That's when we came with the idea of creating an app that could help Alzheimer's patients with their day to day tasks.
Alzheimer's patients have it tough, depending on the degree of the disease, their ability to perform trivial tasks can vary.
They use sticky notes to try and remember basic things and in general, have to depend on family to live life.
Enter MemorAi, a chatbot integrated into an app, memorai. This chatbot is like a personal assistant to the
Alzheimer's patient! It helps with daily tasks and answers basic questions that the patient might have.
Reminders can be set by just asking the bot to do so! Apart from this, it also keeps track of close contacts which
can be accessed by the patient, merely by asking for the same.
In the likely scenario where the patient feels like he/she is forgetting something, memorai can step in and really help out. Its easy-to-interact-with capability helps with common problems a patient might face, such as forgetting to take their medicines or forgetting their way home. In any case, memorai has the patient covered, and the patient will feel safe.
The frontend was built using flutter and dart. Multiple plugins were used and additional features like patient login
and a memory game were incorporated. The interface is simple and easy to use. Even people not too familiar with smart devices should not have much of an issue navigating around the app.
The backend included some python to access wit.ai. The bot was trained to handle several different kinds of utterances and can manage to help patients with their daily tasks and possibly provide interesting data for doctors
to study and analyze.
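A minimal sketch of what that Python-to-wit.ai hop could look like, using the official wit Python client; the intent and entity names here are hypothetical placeholders, not necessarily the ones MemorAi was trained on.
from wit import Wit   # official Wit.ai Python client

client = Wit("YOUR_WIT_SERVER_TOKEN")   # placeholder token

def handle_utterance(text):
    """Send the patient's message to Wit and branch on the detected intent."""
    resp = client.message(text)
    intents = resp.get("intents", [])
    intent = intents[0]["name"] if intents else None

    if intent == "set_reminder":     # hypothetical intent name
        when = resp["entities"].get("wit$datetime:datetime", [{}])[0].get("value")
        return f"Okay, I will remind you at {when}."
    if intent == "get_contact":      # hypothetical intent name
        return "Here are your close contacts."
    return "I'm not sure I understood. Could you say that again?"

print(handle_utterance("Remind me to take my medicine at 8 pm"))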
Built With
dart
flask
flutter
python
wit.ai
Try it out
github.com
memorai.herokuapp.com |
10,007 | https://devpost.com/software/agatha | Agatha
Inspiration
The Startup came together after the team ran in a hackathon and won a position in the innovation section of Pontifical Catholic University of Paraná. From then on, the group has created solutions to help everyone live in a connected and smart way.
The globe is currently facing a pandemic of enormous proportions that will change how people live, communicate, and interact with each other. In this context, Agatha, meaning
automated gastronomic assistant totally helpful and accessible
, was born. Through her, the group aims to not only make ordering at restaurants quicker, easier, and more fun, but also safer because of the reduced human contact, essential in the current situation.
What it does
Agatha will be available at the tables of the establishment through either QR codes or tablets. Through these, clients will be able to access the menu and self-order everything they need with Agatha. Her uniqueness comes from the fact that she has a personality, which makes ordering not only easier but also more interesting and fun. In addition, she will be able to recognize and communicate through sign language, increasing the restaurant's accessibility to nearly everyone. With this tech, interaction with waiters will be reduced considerably, minimizing the probability of coming into contact with the Coronavirus and reducing expenses for the establishment.
How we built it
Agatha was programmed in js, react.js, wit.ai and css.
We began by programming the Speech to Text and Text to Speech parts, so that the computer could understand what is spoken and carry out a specific task. With this in mind, we realized that Wit.ai would be an extremely useful tool. Through Wit.ai, we were able to create not only a vocabulary for Agatha, but also a whole "context" for her to act in, akin to a real waiter.
With this, Wit.ai recognizes the intention of the user's input (be it spoken or written) and sends it to our application. Within our application's code, there is a function containing a list of possible answers (chosen randomly) for each registered intent. With this, Agatha answers the user appropriately.
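To make that pattern concrete: Agatha itself is built in JavaScript/React, so the Python snippet below is only an illustration of the intent-to-random-answer idea, with made-up intent names and phrases rather than the app's real ones.
import random

# Hypothetical intents and canned replies; the real app's intents and phrasing differ.
RESPONSES = {
    "greet": ["Welcome! I'm Agatha, your waiter today.", "Hi there! Ready to order?"],
    "order_item": ["Great choice! Adding that to your tab.", "Coming right up!"],
    "ask_bill": ["Here is your tab so far.", "Let me fetch your bill."],
}
FALLBACK = ["Sorry, could you say that again?"]

def reply_for(wit_response):
    """Pick a random canned answer for the detected intent, or fall back."""
    intents = wit_response.get("intents", [])
    intent = intents[0]["name"] if intents else None
    return random.choice(RESPONSES.get(intent, FALLBACK))

print(reply_for({"intents": [{"name": "order_item", "confidence": 0.93}]}))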
In order to view the menu, the tab of each client, Agatha's image, the chat and the other features of the application, a program in css and react.js was created.
Challenges we ran into
During our project's development, we had difficulties with the language, since no one in our group had extensive experience with it. Structuring the connection between Wit.ai and our application was challenging, since we had to develop two AIs, connecting them in such a way that one receives the client's intentions and commands, while the other analyses and separates each order correctly and systematically.
Accomplishments that we're proud of
It took a great deal of effort and racked brains to develop a program with ample functionality which, after a certain amount of training, can become helpful and useful to a significant range of establishments. Thanks to the support and training we had, we were able to develop an AI that already recognizes the user's intent.
What we learned
To get into the market we had to research how important the food industry is to the world. After ample research and discussion, we understood that even this billion-dollar industry needs innovation. With this in mind, we started working on an AI that would revolutionize the market. To accomplish this, we learned how to use tools such as react.js and css to build a website. Nowadays, a website alone is not enough to get by in the connected world, so we also had to study other important tools, especially Wit.ai. We learned how to connect it to our code and how to work with the two simultaneously.
What's next for Agatha
In the near future, the group plans to transform Agatha into a hologram. With this tech, ordering will be very unique, getting the attention of more customers, making their experience in the restaurant special.
Built With
react.js
wit.ai
Try it out
www.agatha.opfinds.com |
10,007 | https://devpost.com/software/grapevine-6phogb | Inspiration
You know that one recipe your grandma absolutely rocks, or maybe your uncle is the best story reader ever. Maybe you have a skill that is unique and fun. But would you really ever get employed for these abstract qualities? In this quarantine a lot of people have been homebound and struggling to keep up with their finances; at a time like this we need to empower every member of the family who has a skill that can be marketed. And as an employer, you don't need to feel limited when being specific about the kind of employee you are looking for. You can make the most vague requests and we will still get you exactly what you need, because we understand that you only get 100% of what you seek when you look for it in your own words.
What it does
Our app is a new take on job-seeking applications. It strives to give you exactly what you want no matter how vague your request might be. How do we do that?
1) The app first stores the user's profile based on their purpose, i.e. to be hired or to hire someone (you can do both).
2)It then takes the user to a
Wit.ai
powered chat bot.
3) The chat bot interacts with the user to understand his needs and connects him with other app users who are suitable for them.
Using
Wit.ai
we gather information from user text regarding the skills they are looking for, be it a 'nanny', a 'gardener', or an 'expert cook'. But that isn't all: we also allow our users to search for specific qualities like 'punctual' or 'kind'. We understand that sometimes you want something very specific while making a very vague request, and we are here to provide it. After we gather the user request and extract skills and qualities from it, we match it against our users who are looking to get hired and connect them.
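To make the matching step concrete, here is a small Python illustration (the app itself is Flutter/Firebase, and the 'skill'/'quality' entity names are placeholders): extract custom entities from a Wit response, then rank stored profiles by overlap.
def extract_requirements(wit_response):
    """Pull custom 'skill' and 'quality' entities out of a Wit /message response."""
    entities = wit_response.get("entities", {})
    skills = [e["value"] for e in entities.get("skill:skill", [])]         # e.g. "nanny", "gardener"
    qualities = [e["value"] for e in entities.get("quality:quality", [])]  # e.g. "punctual", "kind"
    return skills, qualities

def match_candidates(skills, qualities, candidates):
    """Rank candidate profiles by how many requested skills/qualities they list."""
    def score(profile):
        return (len(set(skills) & set(profile["skills"]))
                + len(set(qualities) & set(profile["qualities"])))
    return sorted((c for c in candidates if score(c) > 0), key=score, reverse=True)

candidates = [{"name": "Asha", "skills": ["nanny"], "qualities": ["punctual", "kind"]}]
print(match_candidates(["nanny"], ["kind"], candidates))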
How we built it
We used flutter to build our app and firebase to store user information. We used Wit.ai to process the information our chat bot was receiving.
Challenges we ran into
Wit.ai was a new concept for us and we took some time to implement it.
Accomplishments that we're proud of
We were able to understand and implement NLP using Wit.ai.
What we learned
We learnt the application of nlp through wit.ai in terms of data extraction from a conversation.
What's next for Grapevine
We plan on implementing a local mode that will enable one to find jobs or people in your locality itself.
Built With
firebase
flutter
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/mrs-career-wise | GIF
Quick glance
Poster
Feature 1
Feature 2
Feature 3
Feature 4
Feature 5
Architecture
Home page
Ask Data science/Machine learning questions
Ask Data structure and Algorithm questions - 1
Ask Data structure and Algorithm questions - 1
Ask for tips :)
DevOps questions
Know about company's history, founders, culture, mission, interviewing process - 1.
Know about company's history, founders, culture, mission, interviewing process. - 2
Track progress - 1
Track progress - 2
Track progress - 3
Look out for opportunities.
Ask pay related questions
Download your personal data.
Note: Works best on Chrome
Demo -
Link1
,
Link2
Inspiration
The news of my friends losing out their jobs due to COVID-19 was heartbreaking.
Such posts flooded LinkedIn, highlighting how many companies have either fired their existing workers,
or revoked the offers of new hires.
About to graduate in these uncertain times, looking for jobs has become even more difficult.
It has become much more important to prepare thoroughly for your interview processes, as the competition rises stiffly with a growing rate of unemployment.
With hundreds of resources on the web, it can be overwhelming to pick the best one and get started with interview prep.
What it does
Mrs. Career Wise helps you prepare for your next interview at tech giants like Facebook, Microsoft, Google, Amazon for various roles like Software engineering, Testing, Product Management, etc.
Features
Prepare for leading Tech giants
Tech questions - Data structures, Data Science, Machine Learning, DevOps, Product Management ..
Interview tips.
Analytics - Keep track of your progress.
How I built it
1) With pen and paper, list out all the possible ways a user might interact with Mrs. Career Wise. This helped me define the intents and entities.
2) Creating a Knowledge base of interview questions asked by various tech giants for different roles.
3) Creating a wit.ai Python client.
4) Creating a Flask server (a minimal sketch of steps 3 and 4 follows this list).
5) Using Plotly.js to plot graphs for user's progress.
6) Deploying it over Glitch and Heroku.
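The sketch below shows steps 3 and 4 in miniature; the token is a placeholder, the knowledge base is a tiny stand-in, and the intent names are hypothetical rather than the ones the real bot uses.
from flask import Flask, jsonify, request
from wit import Wit

app = Flask(__name__)
wit_client = Wit("YOUR_WIT_SERVER_TOKEN")   # placeholder token

# Tiny stand-in for the interview-question knowledge base.
KNOWLEDGE_BASE = {
    "ask_dsa_question": ["Reverse a linked list in place."],
    "ask_ml_question": ["Explain the bias-variance trade-off."],
    "ask_tip": ["Think out loud: interviewers score your reasoning, not just the answer."],
}

@app.route("/ask", methods=["POST"])
def ask():
    """Resolve the user's intent with Wit and answer from the knowledge base."""
    text = (request.get_json(silent=True) or {}).get("message", "")
    resp = wit_client.message(text)
    intents = resp.get("intents", [])
    intent = intents[0]["name"] if intents else None
    answers = KNOWLEDGE_BASE.get(intent)
    if not answers:
        return jsonify({"reply": "Try asking for a DSA, ML, or interview-tip question."})
    return jsonify({"reply": answers[0]})

if __name__ == "__main__":
    app.run(port=5000)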
Challenges I ran into
The biggest challenge was creating the knowledge base. There is no API that provides such data, so I set out to create one of my own.
Also, going it alone may not be the best idea.
Accomplishments that I'm proud of
I am glad I was able to create a product that would be immensely helpful to people.
What I learned
Using an NLP engine!
Chatbot design - handling intents, entities, contexts...
Creating a complete product from scratch
Caring for user's data privacy
What's next for Mrs. Career Wise
The next step would be to gather a lot of feedback from users.
Expand the Knowledge base to cover more types of questions.
Cover more types of jobs, not just Software-based.
Gamify the process.
Detailed progress monitoring.
Leaderboard - comparing with peers.
Built With
flask
glitch
heroku
jquery
plotly
python
wikipedia-api
wit.ai
Try it out
careerwise.glitch.me
career-wise.herokuapp.com |
10,007 | https://devpost.com/software/well-beings | WQS self assessment test layout
Media kit
Mind map
Messenger screen #1
Messenger screen #2
Inspiration
Mental disorders affect
one in four people
. Treatments are available, but nearly
two-thirds of people
with a known mental disorder never seek help from a health professional. The stigma around mental health is a big reason why people don’t get help. This needs to change. By changing the attitude towards mental health in a community setup, we believe we can create a domino effect of more people opening up as a result of increased social and sympathetic views on mental health.
Our Solution - Wellbeings: A Community
Wellbeings is a Mental Health Community. Unlike most mental health communities, Wellbeings is inclusive to even people that are unaware of mental health problems. This community is called Wellbeings because we want to de-stigmatize mental health.
Our solution to the problem is to provide access to vital information so that people can educate themselves on types of mental health problems, identify any warning signs by a quick self-assessment, information, and resources including helplines, advice on helping someone else, tips on wellbeing, etc.
We want this done in the most interactive way possible, which we believe we can achieve by creating a chatbot and a community that is synonymous with peer support groups. We want to focus on the idea that people with mental illnesses are not abnormal or some isolated group of people, but as many as 1 in 4 people in the world will be affected by mental disorders at some point in their lives. By creating a community, we want to reach out to the victims as well as the general public because they are likely to know someone who suffers from mental illness.
Collectively in a community setup, we harness a "me too" feeling and help members become advocates of mental health.
To sum up, we aim to
advocate the importance of mental wellbeing,
make information accessible and available,
tackle stigma,
empower community,
support people by aiding recovery through early identification & intervention.
Who are we?
We are a team of 4 people - which consists of a developer, a designer, and 2 doctors. All of us share a common vision to improve the intricate health system with the use of revolutionary technologies. Mental health is one of the issues we feel strongly about.
How we built it
Our messenger bot is powered by wit.ai to handle all the NLP tasks. The webhook managing all the backend logic and scoring is built with Flask. For the bot flow, we have used Chatfuel. And for the self-diagnosis of disorders, we have used the WQS standardized test.
Challenges we ran into
Most people don't even know that they are suffering from some kind of mental distress, so they usually don't engage with apps and bots marketed as self-diagnostic/self-help apps. To reach even that unaware user, we have taken a community approach, engaging them by providing a comfortable community that the user comes to see as the answer to their unrecognized problems. Once engaged, we can help them use our bot to take the assessment and learn about their mental well-being.
Most of the self-diagnostic tests available are lengthy or too monotonous, so implementing them in a bot makes for a poor experience and the user drop-out ratio becomes high. Our team of health professionals therefore selected the WQS from various standardized tests and modified it to be more interactive and less negative to increase the conversion ratio. The questionnaire has around 50 questions, but we have made it dynamic so that users are not given disorder-specific questions if their response to the screening question is negative. For a typical user, the effective number of questions is around 15-20, which improves the number of people who complete the test.
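A simplified Python sketch of that dynamic flow is shown here; the module names, question wording, and scoring are placeholders for illustration, not the actual WQS items.
# Each module has one screening question; follow-ups are asked only on a positive screen.
MODULES = {
    "depression": {
        "screen": "Over the last two weeks, have you often felt down or hopeless?",
        "follow_ups": ["Have you lost interest in activities you used to enjoy?",
                       "Has your sleep changed noticeably?"],
    },
    "anxiety": {
        "screen": "Do you frequently feel nervous or on edge?",
        "follow_ups": ["Do you find it hard to stop or control worrying?"],
    },
}

def run_assessment(answer_fn):
    """answer_fn(question) -> bool; returns per-module counts of positive answers."""
    scores = {}
    for name, module in MODULES.items():
        if not answer_fn(module["screen"]):
            scores[name] = 0                      # negative screen: skip the whole module
            continue
        scores[name] = 1 + sum(answer_fn(q) for q in module["follow_ups"])
    return scores

# Example: a user who only screens positive for anxiety answers far fewer questions.
print(run_assessment(lambda q: "nervous" in q or "worrying" in q))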
What's next for Well Beings
We don't stop here. We aspire to engage as many people as we can and bring this to every person who is unknowingly possessed by this demon. We also want to educate the community about mental well being so that they can understand its importance and observe the silent cues of people in distress.
We plan on scaling this solution in the following ways
Incorporate health care professionals to help members with an accurate diagnosis
Add a database of country-wise helplines
Work on suicide prevention
Improve the self-help questionnaire
Make our bot even smarter (Thanks to wit.ai)
Incorporate CBT (Cognitive Behavioural Therapy) to assess and help people with mild symptoms here in our community only.
Built With
chatfuel
flask
glitch
messenger
wit.ai
Try it out
www.facebook.com
glitch.com
m.me
mm.tt
www.mindmeister.com
www.figma.com |
10,007 | https://devpost.com/software/covid-tracker-bot | Inspiration
In an effort to keep people aware of how serious COVID-19 has become, we wanted to create an app that offers users quick access to up-to-date case numbers (infections and deaths) caused by COVID. The COVID Tracking API (Tracking API) is established and well maintained. Despite the Tracking API being a great resource on COVID, there is not yet a quick way to obtain data from it because of its vague parameters and the lack of an interface. Therefore, we are building a COVID Tracker Bot using Wit.ai to provide a user-friendly way for everyone to interact with the API.
What it does
COVID Tracker Bot provides a user-friendly interface and meaningful interaction for ExpDev07's
tracking API
, which provides up-to-date data about the world's COVID cases from three sources: Johns Hopkins University (JHU), CSBS, and the New York Times. The chatbot app allows Facebook (FB) Messenger users to get instructions by typing
Hello
,
Get Started
, etc., and query COVID-19 information by typing in the country names and/or date. Once users send out an input, the bot would detect keywords (getting started, country name, date-time), get the right information based on the keywords, and respond to users correspondingly.
How we built it
The bot is written in Python using
FastAPI
. Integrated into FB Messenger, COVID Tracker Bot uses Wit.ai for user input analysis. We use Wit
intents
model to determine user's action, user needs help versus user needs information, and built-in entities,
wit/location
&
wit/datetime
, to obtain parameters needed for the Tracking API calls. The app is trained to mainly recognize countries and time.
In terms of architecture, the bot interacts with three external services: Wit.ai, FB Messenger, and the Tracking API. First, after receiving the FB message through a POST call, the chatbot feeds the raw text content to Wit.ai using the Wit client. Secondly, the trained Wit model extracts the message's intent and entities. Thirdly, the data received from Wit is used to obtain COVID data from the Tracking API. Finally, the chatbot processes and returns the data by replying to the end-user.
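A condensed FastAPI sketch of those four steps follows; the tokens are placeholders, the country-code lookup is a tiny stand-in, and the tracking API URL/fields are simplified from the public ExpDev07 API rather than copied from the bot's real code.
import requests
from fastapi import FastAPI, Request
from wit import Wit

app = FastAPI()
wit_client = Wit("YOUR_WIT_SERVER_TOKEN")            # placeholder token
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"                # placeholder token
TRACKING_API = "https://coronavirus-tracker-api.herokuapp.com/v2/locations"   # illustrative URL
COUNTRY_CODES = {"United States of America": "US", "Vietnam": "VN"}            # tiny stand-in lookup

def send_to_messenger(recipient_id, text):
    """Step 4: reply to the end-user through the Messenger Send API."""
    requests.post(
        "https://graph.facebook.com/v7.0/me/messages",
        params={"access_token": PAGE_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

@app.post("/webhook")
async def webhook(request: Request):
    """Step 1: receive the Messenger event, then run steps 2-4."""
    event = await request.json()
    messaging = event["entry"][0]["messaging"][0]
    sender_id = messaging["sender"]["id"]
    text = messaging.get("message", {}).get("text", "")

    wit_resp = wit_client.message(text)                                  # step 2: intent + entities
    locations = wit_resp.get("entities", {}).get("wit$location:location", [])
    name = locations[0].get("resolved", {}).get("values", [{}])[0].get("name") if locations else None

    if name in COUNTRY_CODES:                                            # step 3: query the Tracking API
        data = requests.get(TRACKING_API,
                            params={"country_code": COUNTRY_CODES[name], "source": "jhu"}).json()
        reply = f"Latest confirmed cases for {name}: {data['latest']['confirmed']}"
    else:
        reply = "Tell me a country, e.g. 'cases in Vietnam today'."

    send_to_messenger(sender_id, reply)
    return {"status": "ok"}                                              # always answer 200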
Challenges we ran into
We faced lots of challenges trying to understand and troubleshoot the interaction between the chatbot and FB Messenger. At the beginning of the Facebook app setup, the received payload didn't have a text field because of security restrictions on an in-development app. We found that adding test users solved this problem.
At one point during testing, the replies didn't get back to end-users even though the bot received multiple HTTP calls from FB. After some research and help from the Facebook Online Hackathon community, we realized that the FB Messenger mechanism only allows
200
responses, categorizing any other responses as
500
, and leaves the unprocessed messages in the queue. Thus, we implemented better logging and exception handlers to ensure that the chatbot always returns a
200
.
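One way to guarantee that, sketched with FastAPI's standard exception-handler hook (names and logging details are illustrative, not lifted from the bot's code):
import logging
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()   # in practice, the same app instance as the webhook sketch above
logger = logging.getLogger("covid-tracker-bot")

@app.exception_handler(Exception)
async def swallow_errors(request: Request, exc: Exception):
    """Log the failure but still answer 200 so Messenger does not keep re-queuing the event."""
    logger.exception("Webhook processing failed: %s", exc)
    return JSONResponse(status_code=200, content={"status": "error logged"})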
Wit.ai's built-in entities helped us extract crucial data to get COVID cases from the Tracking API, which requires two-letter country codes. However, challenges arose when we tried to build a lookup that maps country names to country codes. Wit doesn't provide any documentation about the resolved
wit/location
entity's values; for example, "USA" is resolved into "United States of America" by Wit. Our solution was to run a dataset of country names and codes through Wit to obtain the exact string values, and map them to the corresponding country codes.
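In code, that lookup can be as simple as a dictionary keyed by Wit's canonical names; the subset below is purely illustrative.
# Built offline by running a country-name dataset through Wit and recording the exact
# strings Wit resolves to (illustrative subset; a real table would cover every country).
WIT_NAME_TO_ISO = {
    "United States of America": "US",
    "United Kingdom": "GB",
    "Vietnam": "VN",
    "India": "IN",
}

def country_code_from_wit(wit_response):
    """Map the resolved wit/location value to the two-letter code the Tracking API expects."""
    locations = wit_response.get("entities", {}).get("wit$location:location", [])
    if not locations:
        return None
    name = locations[0].get("resolved", {}).get("values", [{}])[0].get("name")
    return WIT_NAME_TO_ISO.get(name)

sample = {"entities": {"wit$location:location": [
    {"body": "USA", "resolved": {"values": [{"name": "United States of America"}]}}]}}
print(country_code_from_wit(sample))   # -> "US"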
Accomplishments that we're proud of
We learned about this Hackathon when it was just one week away from the submission date. We are proud that we promptly came up with the COVID Tracker Bot idea and were able to implement it, though none of our team members ever built a chatbot before. We consider our ability to learn and apply a great amount of new knowledge while doing this project a big accomplishment.
What we learned
We gained in-depth knowledge and hands-on experience in NLP, data processing, a test-driven development framework, and building a chatbot. Having always assumed that a very specific, well-designed chatbot framework is a must, we learned that a secure implementation can be as simple as an API/microservice with
GET
and
POST
endpoints, especially with the help of Wit.ai and Facebook Messenger authorization protocol.
What's next for COVID Tracker Bot
For now, the bot only supports Johns Hopkins University's data and processes input with country names and/or a specific date. We'd love to add the other two sources (CSBS and New York Times) in the future, which provide information on US states and regions, as well as more functionality to process date intervals/multiple time values.
At the moment, our processing time is a bit slow; it is taken up by the bot’s calls to Tracking API. The delay might be even longer (up to 30 seconds) if the data isn’t cached in the Tracking API. Therefore, instead of implementing the chatbot as an app separately, it can be an extended feature in the currently open sourced tracking API (a.k.a. a bunch of additional modules and two more endpoints). The transition is feasible since both the bot and the API implements FastAPI. Lastly, our main goal for next steps would be scaling and expanding the bot, specifically using Facebook Messenger Quick Reply to train Wit.ai and integrating the bot into other social media platforms like Twitter.
Built With
chatbot
fastapi
heroku
messenger
python
tdd
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/spydergramai | Inspiration
What it does
SpyderGramAI is a web scraping tool that collects and collates images and videos of Instagram content.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for SpyderGramAI
Try it out
bitbucket.org |
10,007 | https://devpost.com/software/stigmatized-lekds7 | Inspiration
This was inspired by a recent Facebook social media trend in my country, Nigeria, where rape survivors found it difficult to come out publicly due to the stigma and guilt they face as a result of the experience.
What Stigmatized does
Stigmatized is dedicated to helping survivors of sexual assault. The Facebook Messenger chatbot is intended to identify with rape survivors and converse with them in a friendly tone; it recommends steps from helpguide.org to help them recover from any trauma, overcome the guilt that follows, and pursue legal action.
How I built it
I used a Node.js back-end along with a Heroku server to implement our Facebook Messenger Chatbot. Then I employed Facebook Wit.ai NLP to process user input and provide adequate response.
Challenges I ran into
I had issues deploying to the Heroku server. Time was also a factor, and given the power supply and internet connectivity in my country, Nigeria, that was a big challenge too.
Accomplishments that I'm proud of
I was able to implement a chat bot for something that I'm so passionate about.
What I learned
I came into this hackathon with no experience using Wit.ai or even deploying an app server. This has given me exposure, and even though I cannot say I'm very good at it now, I believe it is a great step. With what I have learned about Natural Language Processing, I'm sure the coming years will be busy ones with NLP in mind.
What's next for Stigmatized
Stigmatized is far from perfect. As someone with a passion for humanitarian work, I intend to use it to bring that passion to life, but that cannot be done unless the idea is fully established.
Built With
express.js
heroku
javascript
node.js
wit.ai
Try it out
www.messenger.com |
10,007 | https://devpost.com/software/witty-walk-with-me | The Proud Pic
Playing around with Wit.ai
Starting the hack!
Finishing touches applied!
DONE!
Proud Final Pics
Yeah Yeah sometime it did listen wrong! BUT worked most of the time!
Inspiration
I got the inspiration to develop a smart walking stick from my relatives: my father gave me the idea and my mum helped in building the walking stick. This was really fun, and it was my first Devpost & FB Hackathon.
A Walking stick built for the people who are alone.
A quick overview regarding people living alone.
For example: if people around you hear you saying "Hey, my heart is hurting badly!", they will surely rush you to the nearest hospital or at least call your closest friend/neighbor/relative, and you are almost saved!
But now think what if you were alone in this situation, and what if that was a
sign of a Heart Attack
. You are probably dead by the time someone realizes that there is no movement from this house for a long time!
And this is where my "Witty" comes into the scene! Witty intelligently understands the user's distress calls and triggers an event, such as an SMS or a call, as per configuration.
Some document on the web supporting this case
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6199841/
UN -Sustainable Goals
My project adheres to the third goal of sustainable development. Ensuring healthy lives and well being of the end-user.
Ensure healthy lives and promote well-being for all, at all ages.
Using Wit.ai in Health Care can bring in more opportunities to understand a remote patient in need.
In this time of the COVID-19 pandemic, people can't visit each other or talk with anyone in person. If bots built with services like Wit.ai can be deployed in healthcare, it will greatly benefit the aged, those living alone, and maybe kids too.
Problems faced by the “elderly staying alone”
What it does (current stage)
Listens to users distress sounds.
Creates alarms based on different distress voices. (sends msgs)
How we built it
Wit.ai (FB NLP API). It was very satisfying to see an actual NLP service working in my project. I trained Wit.ai with many negative, help-related statements.
Twilio for alerts (sms alerts). Applied a simple if-else logic to trigger events.
We triggered an SMS event from Twilio when a negative statement was detected by the trained Wit.ai model! 😍
Challenges we ran into
I faced a ton of challenges in this project.
Everything from start to end was a challenge because it was new, like finding a wake-word detection library only to discover that the Jetson Nano wouldn't support some of them.
Firstly, due to the lockdown I couldn't get cheaper, lightweight hardware for the project, so I had to stick with a huge "Jetson Nano" which I got in an old idea-submission contest. Hardware accessories in India are very costly, so development is always hindered in some way!
The microphone battery died halfway through and I had to wait a lot, but then I found that a good old camera's microphone worked well.
I'm really new to developing a whole project involving APIs, so it took some time to figure out the code and make it safer!
I messed around with the project logic a million times; I kept changing the way the idea had to be implemented.
Every day the main code would change!
Accomplishments that we're proud of
My very first hackathon project
Fell in love with Wit.ai (now I'm going to use it as a bot service for new projects)
Got to explore more using Python
Tried Twilio (I had seen people sending sms from devices but didn't know that this was easy)
I feel confident now to participate in other public hackathons.
What I learned
Python intermediate
Using API services in Python
Using Twilio
Wit.ai
What's next for Witty Walk With Me
PyTorch CV for surrounding environment detection.
Foldable wheels attached under the stand so that it can move around the house and also guide the user along a specific path, just like an autonomous car (highly optional, because I can think of the dangers and ways it could go wrong if not properly implemented).
Ultrasonic based object detection so that it can work without light too.
Use lighter board such as RPI Nano and more sensitive microphone, make the whole project smaller and safe for kids too.
Planning a hard shock proof case.
Audio device pair-able for voice based interaction.
I have a very good feeling that bots like "Witty" will be needed a lot in houses where people live alone with no one to assist. Not just homes: Wit.ai has unlocked the possibilities of NLP and voice-based add-ons in every other gadget you can think of. Voice interfaces are the future, and they're here.
Some understandable demo code
import json

from twilio.rest import Client
from wit import Wit

# Load API keys from a local JSON file (kept out of version control).
with open('keys.json') as json_file:
    keys = json.load(json_file)["api_keys"]
access_token = keys[0]['wit']    # Wit.ai server access token
account_sid = keys[0]['tsid']    # Twilio account SID
auth_token = keys[0]['tauth']    # Twilio auth token

# Send the recorded audio clip to Wit.ai for speech recognition and trait detection.
wit_client = Wit(access_token)
with open('temp.wav', 'rb') as f:
    resp = wit_client.speech(f, {'Content-Type': 'audio/wav'})

text = resp['text']
trait_confidence = resp['traits']['help'][0]['confidence']
trait_value = resp['traits']['help'][0]['value']

# If Wit is confident the user is calling for help, send an SMS alert through Twilio.
T_MSG = "Hi, you have an emergency msg from your dear! -> '" + text + "!'"
twilio_client = Client(account_sid, auth_token)
if trait_value == 'call' and trait_confidence >= 0.75:
    message = twilio_client.messages.create(to="+918237842347", from_="+12245076842", body=T_MSG)
Built With
jetson-nano
python
pytorch
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/wit-covid | Cover Pic
Information Retrieval
How to use
Bye Bye
Inspiration
My major inspiration is to help the needy in this pandemic situation. In a country like India, many poor people don't have proper knowledge of websites, or even access to televisions. Because of this, much of the information regarding precautions, the nearest hospital facilities, etc. never reaches these people, and as a result many people are dying on the streets simply because they either don't visit a proper health facility in time, or the one they visit doesn't have space. But one thing people do have is a smartphone, whether cheap or expensive. And one app I see people using frequently is Facebook (one of the company's major impacts in connecting people). So my teammate and I decided to build a bot inside FB Messenger itself, so that people come across it often, don't have to search through websites or the streets, and have all the information easily accessible, just one question away.
What it does
This bot tells you about the current cases, deaths, and recoveries from COVID-19 country-wise and also state-wise, though the latter is limited to India for now at least. One of the key features is that if you have symptoms, or feel you might have symptoms, the bot can lead you to various helpline numbers so that help arrives immediately if a serious case is encountered. If your health is deteriorating, the bot also provides information about hospitals in Indian states treating COVID-19 patients and tells you whether the hospitals have space or not (this depends on the state), so that you reach the right hospital at the right time.
How I built it
We used Node.js, the Messenger API, and wit.ai, and made them communicate with each other. Questions received from the Messenger API were sent to wit.ai for NLP analysis, and the processed data was passed to Node.js for further actions. After the actions, the result was sent back to the Messenger API as the answer.
We scraped information from various public websites, accelerating their goal of getting useful information and resources to the people.
Challenges I ran into
The major challenges arose while collecting data, especially live-feed data; collecting this type of data was important for a better analysis of the current scenario. Another major challenge was extracting information from Wit's response. For different sentences, Wit returns different JSON structures; for example, Wit identifies some locations but isn't able to resolve them. To handle such scenarios, each possible sentence was carefully tested using Wit in Python. Identifying and catching bugs in the questions asked was by far the toughest challenge to tackle: because Wit is so eager to extract useful information, it sometimes answered a question with a similar question's answer. We also had some minor issues communicating with the websites, which were resolved quickly.
Accomplishments that I'm proud of
I'm proud to have built something useful not for well-earning, educated people, but for people who don't have access to many privileges. Through this hackathon, I got an opportunity to build something useful for the community.
What I learned
Through this hackathon, I was glad to be introduced to Wit.ai. It is such a nice framework with great accuracy; I was impressed by its keyword detection and how well it predicts context. With this project, I also learned how to identify the areas that need to be targeted or addressed to bring about a massive change for the betterment of the community.
What's next for Wit COVID
Looking at it from a broader perspective, there is still an important feature missing: identifying COVID hotspots or sealed areas near our homes. We were working on this feature, but due to lack of time we decided to put it on hold. We also weren't able to add the various NGOs working on COVID relief, even though these organizations are often the fastest to reach an infected person or area and provide proper medical attention before the actual medical facilities arrive.
Built With
facebook-messenger
glitch
javascript
node.js
wit.ai
Try it out
m.me |
10,007 | https://devpost.com/software/schemeai | ext/Voice enabled bot serving the schemes to the public using wit.ai
Inspiration-
I was inspired by the challenges faced by the public during the lockdown, as people return to their hometowns and struggle with the difficulties they currently face.
My application serves the purpose of helping the citizens of my country access the schemes provided by the government.
This will help address poverty and the country's economy, as it gives people opportunities in various sectors and helps each one access the benefits.
What it does-
It automatically answers questions asked by the public about the schemes.
It features voice and text input to ease access to the application.
Using natural language processing, it raises awareness about opportunities across various sectors according to the user's interests, clustered into similar entities.
How I built it
I built it using Python with the wit.ai service provided by Facebook for natural language processing.
Using the speech and message features of wit.ai also helped me a lot in handling voice and text messages, and I could generate a templated message that provides the user with rich data about the schemes.
Challenges I ran into
I ran into a lot of challenges because I had no prior knowledge of Facebook development or wit.ai.
I only knew Python, so I learned from scratch how to build a bot app with wit.ai.
I devoted a lot of time to learning before building the application.
I struggled with unknown bugs for a long time, but I managed because I invested whole days in the development of the project.
Accomplishments that I'm proud of
I am very happy and proud to have become a Facebook app developer in a short time after learning from scratch, and I can now say that I am skilled enough to contribute to the growth of the community.
What I learned
What's next for SchemeAi
I will be adding a lot of features as I learn what is available:
Support for various languages
Video- and image-enabled message conversations
Enhanced templated messages
More accurate data for the public
A notification/reminder messages feature
Built With
atom
audio
bot
facebook
facebook-messenger
flask
heroku
http
python
wit.ai
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/open-accessibility | Check-Accessibility
Getting info
Adding info
Inspiration
No one can deny that there is plenty of information for the general public to plan their day, visit restaurants and museums, and go to work; however, accessibility information for people living with disabilities is still extremely limited. Websites like Yelp and similar reduce the concept of "accessibility" to a binary selection (wheelchair accessible or not). Unfortunately this doesn't help the majority of people stranded at home with different disabilities. For example, the visually or hearing impaired get no help from the current categories. Many people are left having to call or email, and that can be discouraging and a way to block them from integrating into society. We believe that technology, in this case AI applied to an assistant, has the potential to make life better for a lot of people.
What it does
The Accessibility-Check bot has two main functions. The first is to organize a coordinated crowdsourcing effort to map restaurants, stores, offices, etc. It basically fires off a quick set of questions like "Do you see a ramp?", "Can you ask for a Braille menu?", "Is the bathroom on the first floor?", etc. All these questions are based on the Americans with Disabilities Act (ADA) and the OpenTravel Alliance (OTA).
The second functionality of this bot is to inform the general public about the accessibility based on their location. A mechanism will be included to catch unanswered queries, save them in a database and add them to the crowdsource queue.
Stack we'll be using: WIT.AI (NLP), Yelp, Google APIs (to map locations), React (frontend), Node.js + Express, GraphQL/AWS AppSync (DB, storage).
Github Repo:
https://github.com/six100/accessbot
This project is still in a very early stage and open to new ideas, your ideas. Contact me if you are interested in joining forces.
Looking for:
Node, Express experts.
API Guru (To connect 3rd party Geo-location API's, other APIs)
ML, NLP Jedi (To take it to the next level)
Anyone that want to be part of this.
Objectives:
To make a solution that solves a problem in the real world.
To find the best, leanest, most effective tech solution for all the challenges ahead of us.
To meet great people.
To Open Source the code at the end.
Thanks!
Built With
facebook-messenger
graphql
node.js
react
wit.ai
Try it out
accessbot.chat |
10,007 | https://devpost.com/software/a-2ukwsq | Inspiration
As many stores start to reopen, social distancing becomes more important to contain the spread of Covid-19. We will build an application that allows users to book slots at their favorite stores based on the number of people who will be present at the store at any particular time. If the store is full for a slot, the user can view other available slots and book one of them.
By open, we mean they can accommodate more people.
What it does
The chatbot collects information about the number of people allowed per slot. Then, it uses the location of the user to find nearby stores with open slots for a specific need (such as an item the user is looking for). After listing the nearby open stores, the bot lets users view the number of available slots and the most popular items in the stores. The user then receives a code as a booking confirmation if they confirm the slot booking. Users can use that code, send a message to the store's website/page, or come to the store with the code so that the store can update its available slots.
With this, the bot gives a live count of people inside the store. If the user is a business owner,
How we built it
We built it with: Express Node.js backend, Sequelize ORM and Postgres database. It's hosted on Heroku.
We also use Facebook Messenger Platform
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for a
Built With
express.js
heroku
node.js
postgresql
sequelize-orm
Try it out
github.com
m.me |
10,007 | https://devpost.com/software/days | Home Page
Metrics Page
Metrics Page
Our Inspiration
Our project started with one question. How can we help? Today, AI and NLP are commonly used to streamline lives and simplify tasks through smart assistants and chatbots. With such powerful tools, it was obvious to us that there was so much potential and practicality for a project.
After days of brainstorming, we pinpointed a consistent pain-point in our friends and family’s lives, as well as our own: time always feels like a rare resource. We decided to work on Smarter Days with the
main goal of improving the lifestyle of people who use the app
.
What It Does
Users would log their day-to-day activities and receive visual breakdowns of their activities over the course of days, weeks, or months.
This will enable people to gain valuable insight into how much time they spend per activity which can have a great positive impact on time and life management.
How We Built It
Our project started by enabling our Wit.ai model to recognize and sort different types of user activities. We created intents and entities for working, exercising, studying, and resting activities and trained it on the different ways of phrasing them.
We then built a full-stack web application to house the model and provide a user interface for users to interact with the model.
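The app itself is a MERN-style stack, so purely as an illustration of the classify-then-log idea, here is a small Python sketch with hypothetical intent names and an in-memory store standing in for MongoDB.
from collections import defaultdict
from datetime import date

ACTIVITY_INTENTS = {"log_work", "log_exercise", "log_study", "log_rest"}   # hypothetical names

def log_entry(store, wit_response, minutes):
    """Append a recognized activity to a per-day store keyed by activity type."""
    intents = wit_response.get("intents", [])
    if not intents or intents[0]["name"] not in ACTIVITY_INTENTS:
        return False
    activity = intents[0]["name"][len("log_"):]          # e.g. "study"
    store[date.today().isoformat()][activity] += minutes
    return True

store = defaultdict(lambda: defaultdict(int))
log_entry(store, {"intents": [{"name": "log_study", "confidence": 0.97}]}, minutes=45)
print(dict(store[date.today().isoformat()]))             # -> {'study': 45}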
Technologies
AI/Machine Learning
:
Wit.ai
Web Application
:
MongoDB, Express.js, React.js, Node.js
Google Firebase for SPA hosting
Heroku for Node.js (backend) hosting
Challenges and Solutions
Challenge
: Due to external circumstances, our project was entirely virtual making coordination tough
Solution
: We adopted Scrum strategies and engaged in daily stand-ups as well as goal/task planning.
Challenge
: Considering the nature of Machine Learning, it’s difficult to train for multiple categories to have intended outputs at the same time. Especially true when considering our project bandwidth (
1 month
)
Solution
: We took a methodological approach to training and created various Word and Excel documents to randomize words and follow the modeling process. This saved us an incredible amount of time and allowed us to comprehensively train for even the edge cases.
Accomplishments
Training a comprehensive Wit.ai model to recognize a wide range of entries
Creating a full-stack application from the ground up
Consistent virtual coordination and communication over the course of the project
What We Learned
Natural Language Processing techniques and concepts
Making use of Wit.ai models and training tools
Full-stack development
What's Next For Days
Continuous learning via phrase validation from the user (correct/incorrect validation options)
Built With
express.js
firebase
heroku
mongodb
node.js
react
wit.ai
Try it out
smarter-days.web.app
github.com |
10,007 | https://devpost.com/software/wkend-home-maintenance |
Wkend chat!
Wit App ID: 328662241461519
Inspiration
Will has long talked about a management system for the home, a seamless way to keep on top of building maintenance and create a record of work done. Together, we came up with a chat-based solution for keeping track of work done, work to do, and even recommendations for your home. We think there's a lot of potential in scheduling, locating contractors, and keeping a historical building record!
What it does
When you first start using Wkend, you'll be asked to give your home a name and describe it a little bit. Using Wit.ai, we were able to parse those attributes and begin creating a record for the home. From there, you can tell Wkend tasks you need to do regularly, or reference when they were last done.
How we built it
Our first decision was to keep this web-based for ease of accessibility. We turned to Next.js and ANT for our UI, and FastAPI for our server. After doing a bit of training in Wit.ai, we were able to set up simple endpoints for handling text or speech and returning responses to the user. One neat integration we were able to take advantage of is the text-to-speech browser API now widely available, which, coupled with Wit.ai, allowed for a full speech interface. For authorization we integrated Auth0, and in order to save the user's data we set up Hasura with Postgresql. This combination allowed for a slick web application and a fully featured backend.
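As a rough sketch of the speech endpoint (the route name, content type handling, and response shape are simplified assumptions, not the project's exact code), FastAPI can simply proxy the recorded snippet to Wit's /speech API:
import requests
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"   # placeholder token

@app.post("/speech")
async def speech(audio: UploadFile = File(...)):
    """Forward a recorded snippet to Wit's speech endpoint and return the parsed result."""
    body = await audio.read()
    resp = requests.post(
        "https://api.wit.ai/speech",
        headers={
            "Authorization": f"Bearer {WIT_TOKEN}",
            "Content-Type": audio.content_type or "audio/wav",
        },
        data=body,
    )
    parsed = resp.json()   # text, intents, and entities (shape depends on the Wit API version)
    return {"text": parsed.get("text"), "intents": parsed.get("intents", [])}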
Challenges we ran into
We ran into so many challenges! Recording in different browser contexts, sending speech snippets to FastAPI, trying to launch the Next.js app on AWS lambda... the list goes on. We had high hopes for completing a fully fleshed out application, but at the end of the day we were both excited to take our first stab at chat/voice enabled app.
Perhaps one of the more surprising challenges has been comprehending the types of interactions a chat-based interface entails. How do we make it clear to a user the actions available? How do we handle different chains of thought that end up at the same result?
Accomplishments that we're proud of
A few of our accomplishments were launching FastAPI on lambda, getting both sides of the voice interaction to work, and overall architecting a multifaceted application. The combination of libraries we arrived at seems very versatile, and we're excited to push Wkend further.
What we learned
Don't try too many new technologies at once! Oh my goodness, we overwhelmed ourselves and were not able to get to a super cohesive application. That's not to say we have any immediate regrets, learning to use many of these new tools such as Next.js and Wit.ai has made it all worthwhile.
What's next for Wkend
We're going to continue developing Wkend until it can be properly deployed, tested out in the real world, and then shared with a few friends we know who are interested in using it. There are still a few key features we would like to get to, such as searching for contractors or providing recommended cost-saving tasks. We'd also like to provide Alexa or Google Home integration so interactions could be more passive--the goal here really is simplifying staying on top of house work and keeping a record of it.
Thanks for the opportunity to be part of this Hackathon!
Built With
fastapi
hasura
lambda
next.js
postgresql
wit.ai
Try it out
github.com
test.wkend.work |
10,007 | https://devpost.com/software/heybro-programmers-personal-assistant | 6. Execute a project
3. Project analysis
4. Select another project
1. Select a workspace
2. Select a project
7. Exit screen
5. Open a project
Inspiration
Command-line interfaces are a hassle. It just tests our memory instead of logic. As programming professionals, we always felt there should be someone who helps us with all other mundane tasks like opening, running, analyzing projects, etc, so we can focus on the logic. This is where HeyBro comes into picture!
What it does
HeyBro is an intelligent personal assistant who resides in the command line, and we can ask it to do mundane tasks. There are no set commands like in a typical command-line interface. Instead, just type in whatever you want to do, and it uses Wit.ai's NLP to identify the user's intent and executes commands to fulfill the need. Simply put, type in "run this project" or "execute it" or "can you start this" and the code will be executed. Similarly, "give me an analysis of this project" or "examine it" will run an examination of the project. Basically, ask HeyBro what you need and get it done.
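Conceptually, the loop is: free-form text goes to Wit, the top intent comes back, and a matching shell command runs. HeyBro itself is a Node.js CLI, so the sketch below is only a Python illustration of that loop, with made-up intent names and commands.
import subprocess
from wit import Wit

client = Wit("YOUR_WIT_SERVER_TOKEN")   # placeholder token

# Made-up intent names mapped to the shell commands they should trigger.
COMMANDS = {
    "run_project": ["npm", "start"],
    "analyze_project": ["npm", "audit"],
    "open_project": ["code", "."],
}

def handle(text):
    """Resolve the free-form request to an intent, then run the matching command."""
    resp = client.message(text)
    intents = resp.get("intents", [])
    intent = intents[0]["name"] if intents else None
    if intent not in COMMANDS:
        print("Sorry bro, I didn't get that.")
        return
    subprocess.run(COMMANDS[intent], check=False)

handle("can you start this project for me")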
How I built it
We've created a command-line interface with JavaScript utilizing the capabilities of NodeJS. An npm package named Enquirer is utilized to take inputs.
Challenges I ran into
Running some commands with NodeJS was tricky, such as running a project; a great amount of time was spent researching this. There were also lots of edge cases: there are tons of programming languages and libraries, and supporting everything requires hectic effort, so we limited our scope for the first version to tackle this.
Accomplishments that I'm proud of
HeyBro can now do basic analysis on projects, open them, and run them (NodeJS only at the moment, but it is easily extensible to other languages). Wit.ai's NLP capabilities are impressive, as training our model is such a breeze. Above all, this is a project that we, as programmers, will use on a day-to-day basis from now on. And we are pretty sure that with some optimization, this project can take many repetitive tasks out of our schedule, which will help us focus on things that really matter.
What I learned
NLP has a huge potential to make our life easier, and imagination is the limit. We will try to integrate NLP wherever possible to ease interactions from now on.
What's next for HeyBro - Programmers Personal Assistant
Automatic project language detection with support to more languages
Improve project analysis
Linked commands. Eg: we can just tell HeyBro to "add a commit message 1234fixed and push to master" and HeyBro will detect it is a git command, switch to master, perform a
git add .
and
git commit -m "1234fixed"
then push it to remote.
More useful traditional command-line replacement commands, like "rename this file abc.txt to 123.txt"
More activities, by analyzing user queries
Built With
javascript
node.js
Try it out
github.com |
10,007 | https://devpost.com/software/messages-with-hope | working figure of chatbot
Inspiration
To this day, there remains no constructive, operative, and practical method that efficiently helps in emergencies to deliver critical patients to hospitals.
Getting access to an ambulance and having it reach the patient's location is time-consuming. Not only that, the process of booking an ambulance can take approximately 10 to 20 minutes, because the user has to explain the route to the ambulance driver and provide details to the ambulance call center. These formalities take a lot of time, which can be fatal for critical patients who need to reach the hospital on time.
What it does
Messages With Hope is a helpful chatbot that acts exactly as its name suggests when it comes to booking an ambulance and making contact easier between the ambulance service and the user. The whole process is automated, and you don't have to worry about any external equipment since it is available right on your mobile phone. All you have to do is message in the chat box, provide the address as text or a pinned location, and provide the other information the ambulance service needs, and you won't have to worry about getting an ambulance to your doorstep. From there you can tell the ambulance driver which of the nearest hospitals you chose as the destination. It is time-efficient, easy, and reliable.
Innovation
Currently, no chatbot exists that provides such a service. The Wit.AI platform was used to understand entities in the conversation, and we integrated it with Google Maps to provide the user with different routes to different hospitals, making it easier to choose a hospital, especially if the user is new to or traveling in that area. The ambulance service also gets the exact location of the user, helping the ambulance reach them in the minimum time possible.
What we learned
It was quite interesting to work with Facebook technologies, especially Wit.AI. Building different conversation cycles enabled us to understand how chatbots should be developed in the long run.
Impact
In Pakistan, the ambulance service systems are quite unreliable, and the number of hospitals is limited. The process of calling an ambulance, registering, and contacting the ambulance driver to help him reach the location also takes up a considerable amount of time. With our solution we aim to reduce that time as much as possible and provide the user with as much knowledge and information on the spot as possible, assisting them in the emergency and helping to save a person's life.
Built With
facebook-messenger
google-maps
google-places
node.js
wit.ai
Try it out
m.me
wit.ai |
10,007 | https://devpost.com/software/addmision-helper-0ikgzl | our logo in test group
Inspiration
We realized that it's time to automate this area of life. During the creation process, we discovered that there are no similar projects, so this is a good opportunity for us to develop one.
What it does
Answers for specific sections are prepared in the form of standard chatbot buttons, and you can also ask a question directly without going to the menu. Another feature, not related to AI, is tracking the admission ranking of students: a person does not have to keep nervously searching for the list and their place on it, as the bot will immediately report their position.
How I built it
It was built with Python.
Challenges I ran into
Figuring out how to process words and match them to the correct answers.
Accomplishments that I'm proud of
It works!
What I learned
How to prepare words for search and correct spelling mistakes, and how to work with tables and machine learning.
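As an illustration of the "prepare words and correct mistakes" step (not the project's actual code), one simple way in Python to match a misspelled query word against a known vocabulary is difflib.get_close_matches; the vocabulary below is hypothetical:

```python
from difflib import get_close_matches

# Hypothetical vocabulary of admission-related keywords the bot understands.
VOCAB = ["admission", "scholarship", "deadline", "documents", "rating", "dormitory"]

def normalize_word(word, vocab=VOCAB):
    """Return the closest known keyword for a possibly misspelled word, or None."""
    matches = get_close_matches(word.lower(), vocab, n=1, cutoff=0.75)
    return matches[0] if matches else None

print(normalize_word("admision"))    # -> "admission"
print(normalize_word("scholarshp"))  # -> "scholarship"
```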
What's next for admission-helper
We will try to launch it at our university.
For Facebook, this is a great opportunity to increase interest in its platform by creating a link between universities and the ability to connect educational institutions to this program.
P.S. We did not have time to translate everything into English or cover every admission rule, but we think the general idea is clear.
Built With
keras
python
tensorflow
vk
Try it out
github.com |
10,007 | https://devpost.com/software/old-is-gold | Inspiration
Honestly, one of my best friends' 87-year-old father was recently diagnosed with COVID19. Sitting thousands of miles away, my friend had no way to ensure that his father, a diabetic, had even a basic emergency response to his queries (not talking about treatment here, just a basic query and response to a simple question, which is so difficult to find).
This is an app with a heart (a bot that cares for elderly senior citizens), virtually giving senior citizens adequate care and solace amidst trying COVID19 times, when they feel lost for purpose or alarmed by the increasing spread of the virus.
Like Alfred the butler in a famous DC superhero comic (our team doesn't intend to take any credit for DC creations and acknowledges all copyrights owned by DC), this bot is a voice butler providing interesting and informative responses to key queries around COVID19 and ways to manage it.
Elderly folks staying alone amidst the COVID19 pandemic crisis are often ignored and marginalised due to their age. Not at all easy, and a harsh reality of our times.
Hence we are inspired to call ourselves OLD IS GOLD...
What it does
An interactive NLP bot that answers health-related queries around COVID and connects elderly people with immediate directions, solutions and clarifications.
How I (no we) built it
We built it using wit.ai and created a few use cases and prototypes around integrating wit.ai with a mobile app or Facebook.
Challenges I (no we) ran into
We found integration with Facebook and apps a bit challenging, given time constraints rather than any technology restraint.
Then there was making a compelling storyline which powerfully conveyed our USP.
Accomplishments that I'm proud of (for my rock star team)
We completed the end-to-end workflow in less than 16 hours. Period. That's something we believe our grandma will celebrate by baking homemade cookies once she sees our prototype and how safe it makes her feel.
What I (we) learned
A stitch in time saves lives. We are not just referring to our time for development, integration and end-to-end testing: a query well answered by our bot could end up saving the life of an elderly patient.
What's next for Old is Gold
We hope to finish near the top of the charts and to seek Facebook's help and guidance for integration and testing.
Godspeed, good luck, and as we say in India, JAI HO!
Built With
wit.ai |
10,007 | https://devpost.com/software/gratitude-genie | Splash Screen
Auth Screen
Gratitude Journal Screen
Mood and Journal Streak Tracker
Gratitude List
Settings Page
Inspiration
2020 has definitely not been a year to remember. The COVID-19 pandemic, deaths, lockdowns, job losses, racial violence and just recently there was the news of a popular Indian Bollywood actor (Sushant Singh Rajput) who committed suicide (which was very disturbing)—it's like COVID started as a spark and now it has turned into a forest fire of negativity.
With everything seeming so grim right now and seeming completely out of control, I felt like there was a need to bring an element of gratitude back into our everyday lives. A reminder to count our little blessings. Just a tiny drop of positivity amidst the forest fire could do our mental health a lot of good.
I think Dumbledore put it way better than I ever could—
"Happiness can be found even in the darkest of times if one only remembers to turn on the light."
That was my inspiration behind taking some time out to build Gratitude Genie. :)
What it does
Gratitude Genie is a conversational everyday gratitude journal. Following are its key features—
Conversation Journal that makes your journal experience fun and engaging
Inspiration
Timely reminders
Journal Streak Count
User Mood Tracker
Save Gratitude List
Beautiful wallpapers updated dynamically
How I built it
I have built this application using React Native. I started by thinking about the bottlenecks that prevent you from journaling on a daily basis. Then, it came down to problem-solving and forming the feature requirements.
After that, I had to plan the conversational flow. How could Gratitude Genie help on days when you just didn't feel grateful for anything?
Chaining is something I picked up from the book "Atomic Habits" and so, I implemented a quick journal streak counter and mood tracker. This can be helpful in maintaining accountability for the way you feel on a daily basis and help you form long-lasting habits.
Wallpapers were added to introduce some spice and a separate gratitude list was added so users can reminisce about their past victories/ memories from time to time.
That's how I ended up building Gratitude Genie. And of course, caffeine helped a great deal as well ;)
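The app itself is React Native, but the streak-counting logic is language-agnostic. As a minimal sketch (an assumption about how such a counter could work, not the app's actual code), here is the date arithmetic in Python, assuming entries are stored with ISO date strings:

```python
from datetime import date, timedelta

def current_streak(entry_dates, today=None):
    """Count consecutive days with a journal entry, ending today or yesterday."""
    today = today or date.today()
    days = {date.fromisoformat(d) for d in entry_dates}
    # A streak is still "alive" if the latest entry is today or yesterday.
    cursor = today if today in days else today - timedelta(days=1)
    streak = 0
    while cursor in days:
        streak += 1
        cursor -= timedelta(days=1)
    return streak

print(current_streak(["2020-06-24", "2020-06-25", "2020-06-26"],
                     today=date(2020, 6, 26)))  # -> 3
```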
Challenges I ran into
I wanted to build for both iOS and Android. So, it came down to React Native vs Flutter. I went ahead with React Native considering the time deadline and the fact that I was coming from a web background with no prior experience in mobile app development.
Also, I first had plans of making it a Messenger bot, but ensuring user privacy was a challenging aspect there. The standalone app, even though more time-consuming to build, offered a lot more flexibility in terms of features.
Accomplishments that I'm proud of
This was my first mobile application and... my first hackathon as well. I like the fact that I've been able to develop it for both Android and iOS devices and that it is in the mental wellness category, which means it can have a significant impact on people when executed well. If this app can improve the mental health of even a single user on a day-to-day basis, that would be the accomplishment I'd be most proud of.
What I learned
From the technical side, I got to learn a lot about mobile app development and got well versed in React Native and other frameworks. This was the first time I got to be both the product manager and the developer. Another important lesson was that ideas don't really work unless you do the work: you can ideate a lot of features, but none of them manifest into reality until you turn them into code.
What's next for Gratitude Genie
Get feedback from early users.
Make the conversations more personal and engaging.
Think about user privacy and implementing anonymous features where people are able to talk openly about depression etc.
Can also implement analytics to track user's mood over a timeline
Add Facebook Sign-In in addition to Google Sign-In
Built With
asyncstorage
expo.io
react-native
react-native-gifted-chat
redux-persist
ui-kitten
unsplash
wit.ai
Try it out
github.com
balloffocus.life
drive.google.com |
10,007 | https://devpost.com/software/ponder-tiw5y3 | Overview of main screens
Final screens (top: adding entry and user profile, bottom: archive)
Final high-fidelity mockups
Initial low-fidelity sketches
Inspiration
The coronavirus pandemic has taken a toll on people’s mental health around the world. Millions of people are isolated from a support system, and with the priorities of our day to day activities changed, it’s not uncommon for many to have feelings of anxiety and uncertainty.
Research has shown that journaling helps people improve their mental health as it’s a way for them to regain control over their emotions. To help facilitate journaling with additional support, we integrated with Wit.ai to use AI to recommend resources that would help users feel supported based on what they journaled.
What it does
Ponder is a journal that uses Wit.ai to recommend various articles and resources specifically tailored towards the user's journal entry for the day. This helps the user reflect deeper on their emotions and help facilitate beneficial changes to their lifestyle. Each journal entry is paired with a related article, and these pairs can be archived for easy access in the future. The user can look at their history to see their progress, and we also provide a virtual plant to illustrate personal "growth" by measuring their app usage.
How we built it
We first used Figma to visually prototype our app, working from sketches and wireframes. We then used Dart and Flutter to construct the front-end of the app, as well as MongoDB for database storage, Mongoose for the REST API, and Wit.ai for sentiment analysis.
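The app itself is written in Dart/Flutter; purely for illustration, here is a sketch in Python of querying Wit.ai for the sentiment of a journal entry, assuming the built-in wit$sentiment trait is enabled in the Wit.ai app (token and fallback value are assumptions):

```python
import os
import requests

WIT_TOKEN = os.environ["WIT_TOKEN"]  # assumed Wit.ai server access token

def journal_sentiment(entry_text):
    """Return the most confident sentiment value Wit.ai detects for an entry."""
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": entry_text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    resp.raise_for_status()
    traits = resp.json().get("traits", {})
    sentiments = traits.get("wit$sentiment", [])
    return sentiments[0]["value"] if sentiments else "neutral"

# A negative entry could then be mapped to supportive articles and resources.
print(journal_sentiment("Today was exhausting and I felt really alone."))
```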
Challenges we ran into
Since we are a large team and we were working on integrating many different frameworks, we ran into the major challenge of learning new languages and frameworks in a short amount of time.
The design team had to learn how to integrate a plethora of different ideas into one prototype that could also be implemented by the rest of our team.
For the frontend team, we both picked up Dart and Flutter for the first time. After learning the ropes, one major challenge we bumped into was configuring the navigation bar to allow for in-page routing.
For the backend, a major challenge was learning how to leverage Wit.AI to produce relevant results.
Accomplishments that we're proud of
We worked extremely well as a remotely organized team, from ideation to submission. Each person on the team was always willing to chip in where needed and took care of their own deliverables as well.
Additionally, a majority of our team had never used any of the tools listed for creating this product. We are proud of how much we have all learned during this endeavor and of our ability to adapt to use new techniques and resources.
What we learned
The frontend team learned how to use Flutter and Dart, experimenting with mobile development for the first time. The backend team learned how to integrate Wit.ai into the multiple technical components that were being used, as well as how to query in Dart. On the design side, we learned how to organize and maintain a large team remotely, as well as how to shorten but efficiently do the UX research portion.
What's next for Ponder
We hope to allow Ponder to support a wider variety of media (videos, research journals, etc.). Another area of focus is to accommodate a larger range of journaled situations and topics.
Backend code can be viewed here:
https://github.com/angelina124/ponder-api.git
. Since it is run through Heroku, downloading the backend code isn't necessary.
The APK files can be found here:
https://github.com/jonnachen/ponder_front/tree/master/apk
Built With
dart
figma
flutter
javascript
mongodb
mongoose
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/travel-partner-3z0abl | Inspiration
As citizens of one of the busiest countries, we face a daily challenge commuting to our destinations on time. Public transport directions and signage are sometimes disorderly and confusing, and properly organized transport data is hard to find. The average person traveling in a new city loses about 3 hours a day commuting. We want to help optimize this and make sense of the chaos.
Imagine if we want to take a taxi, we need to open the Uber app. If we want to take a bus, we need to open CitiMapper. If our wallet does not have enough cash, we need to open Google Maps to find the closest ATM. After completing all these searches and queries across multiple apps, we have already lost at least 5 minutes (1-2 minutes per app). In more serious circumstances, we could even miss a date.
What it does
Travel Partner is Waze for public transport. We are a travel and navigation chatbot that solves the problem of travelling by providing intelligent commuting directions and route analysis for all countries.
Travel Partner is a bot that recommends transit routes from Point A to Point B by any means of transport. It also:
tells you when the next public transport will arrive
tells you how much time you will need to commute
shows you the closest parking area.
helps you discover point of interests around a certain location
just LOVE TRAVELLING
How I built it
We started this project around one month before the deadline, and each of us was responsible for different tasks. One of us came up with batches of utterances and entity keywords to teach Travel Partner with the help of Wit.ai. One focused on building the backend infrastructure. One focused on integrating the bot with Facebook Messenger, and one coordinated and managed the team to make sure the project ran smoothly.
We used Python as the programming language, Wit.ai for Natural Language Processing, HERE API as the data source, Redis for session caching and Facebook Messenger as the bot interface.
Challenges I ran into
The biggest challenge for us was writing a bot, because none of us had previous experience doing so. In addition, we set high expectations for the bot to make sure it did not behave like a traditional rule-based FAQ chatbot. At the beginning, we struggled with caching information within a user session using multiple threads and variables, until we figured out that it was much simpler to use an external in-memory database to store the information.
Another challenge was not technical but about compromising on ideas as a team. Right at the beginning of the project, we spent 5 hours sitting in a room brainstorming ideas for a chatbot that helps people's lives. From bots that check the amount of litter in a rubbish bin to bots that recommend products from online stores, we assessed each idea not only against the judging criteria but also on whether the product would be usable and sustainable, whether it was technically feasible, and, last but not least, our time availability.
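As a rough sketch of that in-memory approach (illustrative, not the team's exact code; key names and TTL are assumptions), per-user session state can be cached in Redis keyed by the Messenger sender ID:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
SESSION_TTL = 15 * 60  # keep a conversation context alive for 15 minutes

def save_session(sender_id, context):
    """Store the partially collected query (origin, destination, mode, ...)."""
    r.setex(f"session:{sender_id}", SESSION_TTL, json.dumps(context))

def load_session(sender_id):
    """Fetch the cached context for this user, or an empty one."""
    raw = r.get(f"session:{sender_id}")
    return json.loads(raw) if raw else {}

# Example: the user said "from Central to the airport"; the mode is still unknown.
save_session("1234567890", {"origin": "Central", "destination": "airport"})
print(load_session("1234567890"))
```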
Accomplishments that I'm proud of
We built a travel companion from scratch as a team, one that could help us and other busy people save time.
What I learned
From a technical perspective, we learnt that it has become a lot easier to apply AI to solve problems, thanks to the many pre-trained models available. This hackathon also strengthened our Python skills and our grasp of multi-threading (though we did not end up applying it to the bot).
Last but not least, we learnt about teamwork. We learnt how to compromise between different ideas and opinions, balance each other's workloads, and share knowledge with one another. As we all come from different educational backgrounds, we could feel each of us bringing our own expertise and knowledge to achieve our common goal.
What's next for Travel Partner
As there is still a limited number of travel-related questions that Travel Partner can answer, we want to gather feedback on what other questions it should learn to answer. We would love to introduce Travel Partner to our friends and share it with the community on Product Hunt. In addition, we foresee that the complexity of the question structure will increase as Travel Partner "gets to know more people". For instance, Travel Partner can now deal with questions like "from A to B by X", but some users may be perfectionists who need "the cheapest way from A to B by X with the least traffic". It will be a challenging task to come up with efficient coding logic and infrastructure to handle this. In the meantime, it would also be interesting to add ways for Travel Partner to help during the commute itself, for example setting an alert to remind us to get off the train. Otherwise, we will miss our dates again.
Built With
facebook-messenger
python
redis
wit
Try it out
m.me |
10,007 | https://devpost.com/software/lilo-ai | Inspiration
We were inspired by Siri, Alexa, and Cleverbot
What it does
It detects the emotion of a user and based on that data produces an output that mimics friendly human interactions.
How we built it
We used wit.ai, GitHub and glitch
Challenges we ran into
We are an international team so the biggest challenge was to select the best time to brainstorm.
Accomplishments that we're proud of
We have created an NLP model that detects human emotions, and it's a first step toward creating an artificial friend on Facebook.
What we learned
That everything in software development has to be done by a team and that teamwork is the best tool to solve problems.
What's next for Lilo AI
We want Lilo to become the new way of human-AI interactions. We want to populate the database with more utterances, teach it how to detect more emotions, and train it on more sitcoms, books, and cartoons to become a great conversational companion. We hope that in the future the AI could become your new best friend.
Our wit.ai app id:
618990298698878
Our code:
https://github.com/brahada/lilo-ai
https://glitch.com/edit/#!/chill-rightful-jacket?path=wit_handler.js%3A19%3A4
Built With
github
glitch
javascript
node.js
wit.ai
Try it out
www.facebook.com
glitch.com |
10,007 | https://devpost.com/software/music-ally-trained-t9zao1 | Brand Logo
Inspiration
A problem we've seen many beginner music students have constantly is finding the motivation to dive into the nitty-gritty details of music. Many of these students find topics such as intervals, chords and progressions a bore compared to the myriad of distractions online (Yes, Facebook. That's you!), and so we thought: Why not stop trying to fight these distractions and try to integrate with them instead? Wouldn't it be great to have a musical companion bot which music students (and anyone who's interested) could ask their questions and even get a dose of inspiration to love music? Thus, Music.ally Trained was born --- and the rest is history :)
What it does
Music.ally Trained is a bot which provides a quick and user-friendly way to get started with the basics of music theory and to discover new music! Here's what it can do currently:
Return an interval given 2 notes
Return the notes in a chord given the chord's name
Return songs which include a specified chord progression
Return a musical joke
Return information about a composer
Return information about an instrument
Jukebox - helps you pick a random song for your next karaoke session!
How we built it
Music.ally Trained is built in Python, using a Bottle app deployed on Heroku. To help our bot achieve musical intelligence, we employed the use of the Mingus library and integrated the Spotify and Hooktheory APIs into our app.
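For a taste of how Mingus can answer the music-theory questions described above, here is a short sketch roughly along the lines the bot would use (illustrative, not the project's actual code):

```python
# pip install mingus
from mingus.core import chords, intervals

# Notes in a chord, given its shorthand name.
print(chords.from_shorthand("Cmaj7"))  # e.g. ['C', 'E', 'G', 'B']

# The interval between two notes.
print(intervals.determine("C", "E"))   # e.g. 'major third'
```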
Challenges we ran into
The main challenges we faced stemmed from being new to Wit.ai and Facebook's ecosystem, as well as a tight timeline given that we started the project with a week left to the submission deadline.
As we only had 1 Developer account and so many features we wanted to implement, it was a challenge for us to find a workflow which would allow us to build, test and refine our work smoothly. Moreover, with strict social-distancing measures in place, we found ourselves spending more time and effort trying to find an efficient way to collaborate rather than focusing on the features of our bot.
With regards to Wit.ai, it was a challenge determining which labels to give to our intents and entities as the lines between them got blurred the more we ventured. As for Messenger, it was difficult to test our bot as it required the creation of a Developer account and it would also be a tedious process to make our app live to the public.
Accomplishments that we're proud of
Firstly, we are proud to have completed this project with only 1 Developer account despite having 2 team members as we had to think of innovative ways to ensure both members could test our bot after making changes to the code. Thus, we decided to take advantage of technology, utilising a combination of communication platforms from Skype's screen share to VS Code's Live Share and to traditional Whatsapp messaging, such that we were able to keep making progress towards the finish line.
Additionally, we are proud that we managed to integrate as many APIs and libraries as we have! Initially, we didn't believe we would need to use many third party tools but by the end of the project, we are proud to say that we have integrated the Spotify and Hooktheory APIs as well as the music theory library Mingus into our project, and have experimented with many others as well!
All in all, we're proud to have accomplished this many features given the short amount of time and how little prior experience we had. Though we ran into many errors along the way, we're glad we were able to press on and crush all (or most) of the bugs!
What we learned
On the technical front, we learned how to create an intelligent bot using Wit.ai and integrate it with Facebook's Messenger, but beyond that we also learned about various deployment methods (such as Glitch and Heroku), a bunch of music APIs and libraries, and Python, our choice of programming language.
Furthermore, we learned about collaboration in the context of software development projects through experiencing challenges such as managing different versions of our app, dealing with merge conflicts due to code changes, and the endless Googling needed to fix our bugs. With regards to our technical challenges, we must definitely mention how helpful the Facebook hackathon community has been, providing us with timely support and advice whenever we felt really stuck. We are thankful to have learned from the experiences of other developers, and looking back, we can now see the importance of not being afraid to ask others for help when we are stuck; hopefully we can provide the same help and guidance to others in future.
Lastly, looking at the amount of time and effort we put in for this hackathon, we can now better appreciate the amount of planning, communication and focus required to deliver a project by a given deadline. Perhaps, it is the immense satisfaction we get when we finally achieve a final product which drives most, if not all, of us to do what we do.
What's next for Music.ally Trained
We would definitely love to expand on the functionality provided by Music.ally Trained as we do have many ideas for further improvements.
For instance, we would like to implement Messenger's Private Replies function to allow users to receive links to useful musical resources so that they can further expand their knowledge given that there are limitations to almost any bot's capabilities. However, since our Facebook page is new and has neither any useful posts nor users, we decided to leave this feature as an option for future work instead.
Moreover, we also have plans to broaden the range of music theory questions users may ask and we believe this would be relatively easy to accomplish as the library we chose, Mingus, offers many useful functions for learning about music. Hence, our project would be easily extensible using it and is rather flexible in this sense.
Additionally, we are also planning to tap on Natural Language Processing, to improve our bot's persona to be even more light-hearted and engaging, so that we can sustain the attention of our users.
Lastly, we hope to further improve our bot's intelligence through more rigorous training such that it will better handle misspellings and provide users a better experience overall.
Built With
facebook-messenger
heroku
hooktheoryapi
mingus
spotify
spotipy
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/mq |
Hanah by Metaquid
Inspiration
Develop a general AI, imagining it in the future, which returns to the present to modify itself.
Tell its story in images, and make those images become real when you actually talk to this AI entity.
The future is realized in many ways, but the best way is to anticipate it!
What it does
Metaquid is an AI that can learn and handle free-form conversation;
it interacts through writing and voice;
by interacting with people it gives and receives useful stimuli for the subsequent development of the graphic novel;
in private mode, dialogues remain private and are not shared (safe mode);
in public mode, dialogues are shared and can therefore interfere with each other (risky mode);
selecting the item allows you to customize the Hanah holographic avatar;
activating the microphone enables the wit.ai voice recognition service.
How I built it
To work everywhere, it was developed as a PWA (Progressive Web App).
It works on both PC and smartphone, cross-platform.
It is connected to the functionality of the metaquid.com blog for later development.
I used PHP7 on the server side and JavaScript on the client side.
Developing in the WordPress environment, I used the jQuery framework already present for some features.
Challenges I ran into
Voice integration without using any framework required a lot of trial and error.
Server-side development in PHP7 has been going on for many years and will continue.
The interference between the wordpress environment and wit.ai integration took me a long time to understand.
The PWA features were not immediately clear to me at the beginning, but in the end I understood what to do.
Accomplishments that I'm proud of
Giving the AI a voice, and the ability to recognize voices, adds the level of reality that was missing.
As the creator of this AI, I am proud to have thought of giving it a story through the graphic novel.
What I learned
I learned to do basic PWA: Add to Home screen (or A2HS for short).
I started learning how to use wit.ai but I will continue to understand more.
I learned to make videos by assembling the images of my comics.
I learned to do without frameworks to keep the code simple and maintainable.
What's next for Metaquid
The next step is to make a version for Facebook Messenger.
In my plans, Metaquid will always be in development until it becomes what is described in the graphic novel.
In reality, the short circuit created between the fantasy of the graphic novel and the reality of development will bring new ideas.
Built With
javascript
jquery
linux
pc
php7
smartphone
windows-10
Try it out
www.metaquid.com |
10,007 | https://devpost.com/software/moodanalyzer | Home Page
Detected Mood using wit.ai
Quotes for uplifting the mood
Inspiration
Mental health is an important part of our life. It impacts our thoughts and our lifestyle. We decided to do something good for society by making a small effort in improving people's moods. As Mother Teresa said "We ourselves feel that what we are doing is just a drop in the ocean. But the ocean would be less because of that missing drop", we believe this small effort will make an impact on people's lives. About 900,000 people die due to suicide every year worldwide. Mood swings are the prime cause for a person to attempt suicide. This has pushed us to ponder over this issue deeply. Here is our small effort to analyze a person's mood based on their daily activities and soothe it in case of a disturbed mind.
What it does
Our application asks users to describe their daily activities using which we detect their mood. With their mood known, we ask them to read quotes and watch videos that have been specially catered for uplifting their mood.
How we built it
We built this application using the wit.ai NLP framework and Flask. We trained the wit.ai application on 8 different moods using several hundred utterances, so it can detect the mood from a phrase describing a person's activities. We then developed a website using Flask and Python that takes the user input (both text and speech-to-text) and connects to wit.ai via its REST API. Based on the detected mood, it displays the mood and shows a set of 4 quotes from our pre-defined collection catered to that mood. Additionally, it selects a video from a collection of video URLs for further impact. If a person provides irrelevant input, the application handles it automatically and displays an error requesting another input.
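A minimal sketch of that Flask-to-Wit.ai round trip (illustrative only; the /mood route name and WIT_TOKEN variable are assumptions, not the project's actual code):

```python
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
WIT_TOKEN = os.environ["WIT_TOKEN"]  # assumed Wit.ai server access token

@app.route("/mood", methods=["POST"])
def detect_mood():
    """Forward the user's description of their day to Wit.ai and return the top trait value."""
    text = request.json.get("text", "")
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    traits = resp.json().get("traits", {})
    if not traits:
        return jsonify({"error": "Could not detect a mood, please try again."}), 400
    # Pick the trait value with the highest confidence across all detected traits.
    best = max((v for values in traits.values() for v in values),
               key=lambda v: v.get("confidence", 0))
    return jsonify({"mood": best["value"]})

if __name__ == "__main__":
    app.run(debug=True)
```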
Challenges we ran into
We had to learn the wit.ai framework and its integration with our Flask application, which took a while initially as the quick-start guide offers limited information. Integrating speech-to-text recognition into the core application also posed a challenge. Apart from the web development challenges, it was very cumbersome to train the wit.ai app manually with several hundred utterances, as it offers no easy way to provide a file containing a dataset. We performed this tedious operation for each of the eight trait values associated with the moods, in addition to collecting a library of quotes and videos for each mood individually.
Accomplishments that we're proud of
We are proud to help society by providing a platform that can uplift people's moods in times when the world is dealing with the ongoing Covid-19 global pandemic leading to the prevalence of discouraging moods among people.
What we learned
We learned how to use the wit.ai NLP framework, how to build an application using Flask, how to deal with REST APIs, and speech-to-text recognition functionality. We also learned how Precision/Recall confidence scores change as we train a model for multiple traits. Last but not least, we learned how to collaborate with team members, work together through virtual platforms, and test our leadership skills.
What's next for Mood Analyzer
We would like to expand our domain by exploring more varieties of moods (apart from the existing eight) and also plan to train the wit.ai with more examples. Additionally, we would like to explore further possibilities of detecting moods beyond the recordings of daily activities and also provide more support apart from displaying quotes and videos.
Built With
css
flask
html
javascript
python
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/medicai-3ir7l8 | Complete Web App Screenshot
Facebook page for MedicAI
Web app front page
Working Bot 1
Working Bot 2
Inspiration
-> We were inspired to work on this project by the lack of quick medical emergency facilities available till now.
-> Most pharmacy delivery systems require you to go through a lot of steps before ordering medical items, and these private apps aren't even available in most regions.
-> In many areas people don't know much about these facilities, or don't know how to use a mobile application.
-> The whole payment process is a headache, as every transaction has to be made without single-click, quick-buy options.
-> Nor have we seen a system that can take a user's self-assessment and suggest medical items in real time.
-> We believe that in an emergency, it's not practical to wait for another person to help us buy medical items.
-> We wanted to leverage the power of a common portal like Facebook, used by millions of users, so that nobody has the headache of installing separate apps for emergencies.
-> Thus we built an intelligent chatbot application called MedicAI.
What it does
-> MedicAI is your day-to-day personal assistant. With its smart cognitive intelligence powered by the Wit.ai engine, this artificially intelligent assistant delivers the most accurate results possible each time.
-> We use a Facebook Messenger bot that asks only a few questions and arranges quick delivery of essential medical items.
-> We have built-in wallets that can be topped up in advance, making users' lives much easier with a single-click buy system.
-> MedicAI has been created to cater to the needs of millions of people, who can easily access all the necessary services virtually from home.
-> Its specialized algorithm helps you find the nearest medical centers, buy emergency medical kits, and even assess your health instantly with the help of its symptom checker.
-> When a user enters the details of the medical items they need, they are prompted to choose a nearby store; on selection, the order is received by that pharmacy.
-> The bot also has a self-assessment system in which users can check any problems they have by talking them through with the bot. The bot is clever enough to help the user with appropriate suggestions.
How We built it
-> For the chatbot we used Wit.ai. The NLP system helped us create a powerful AI bot on Messenger.
-> We created a Facebook page for our services.
-> We built our website using Bootstrap and integrated Facebook Messenger services into our application.
-> With Wit.ai we used Python as the language and later integrated it with Facebook Messenger.
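As an illustrative sketch of that Messenger-plus-Wit.ai plumbing (assumed names and tokens, not the project's actual code), the Messenger side of such a bot boils down to a webhook that verifies itself and replies via the Send API; the echo reply below stands in for a real Wit.ai-driven answer:

```python
import os
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]     # assumed: token set in the Facebook app dashboard
PAGE_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]  # assumed: the page access token

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger calls this once to verify the webhook.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def handle_messages():
    # Each POST may contain several messaging events.
    for entry in request.json.get("entry", []):
        for event in entry.get("messaging", []):
            text = event.get("message", {}).get("text")
            if text:
                reply(event["sender"]["id"], f"You asked for: {text}")
    return "ok"

def reply(recipient_id, text):
    """Send a text reply back to the user through the Messenger Send API."""
    requests.post(
        "https://graph.facebook.com/v8.0/me/messages",
        params={"access_token": PAGE_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )
```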
Challenges We ran into
-> We had never worked on creating bot services, so the process was quite challenging for us.
-> Integrating the bot created with Wit.ai with Facebook Messenger took us some time.
-> We had never integrated a bot application with a live website, so this process was a bit of a task.
-> We are developers from different universities and had our exams going on, so it was a bit challenging to work on this project in between.
Accomplishments that we are proud of
-> Working as a team and completing a project during tough times is something we are all proud of.
-> Building an ecosystem that makes people's lives easier using Facebook's technologies feels great.
-> We had never worked on a chatbot project before, so this allowed us to learn a lot about bots and their concepts.
What we learned
-> Teamwork and time management
-> Understanding Facebook's Messenger technology
-> Integrating a chatbot into a web application
-> Integrating Wit.ai with Messenger using Python
-> Making a demo video
-> Hosting a bot-based website on a cloud service like Heroku
What's next for MedicAI
-> This is our idea and we have presented a demo of our application. Next we plan to work on the chat application in depth.
-> Help more users understand the simplicity of using Facebook for emergency and SOS services.
-> Help pharmacy stores adopt the application.
-> Add more assessment features for users.
Built With
bootstrap
bot
facebook
facebook-messenger
github
heroku
python
wit.ai
Try it out
github.com
medicai-1.herokuapp.com |
10,007 | https://devpost.com/software/911-assistant | Fire on Hacker Way 1
Inspiration
When people call 911, they want it picked up on the first ring, usually for a good and urgent reason. But people often have to wait longer than expected due to a lack of 911 operators, especially during an infectious disease pandemic like the one we are in now.
In South Korea, where we live, one sick employee took out an entire call center.
Of 97 confirmed cases, 94 were working on the same floor of 216 employees, translating to an attack rate of 43.5%. That is far higher than the household secondary attack rate among symptomatic case-patients, which is only 16.2%. So it's clear that call centers are vulnerable to infectious diseases: by the nature of the job, people sit close together and have to keep talking. It's also important to isolate suspected patients early to block further transmission in crowded work settings. But what if a 911 operator gets infected and the entire center goes into quarantine? Then who will answer the calls?
What it does
AI Assistant for 911
It converts the caller's voice directly into text and automatically extracts key information with an AI model that has learned emergency call scenarios, significantly reducing call-handling time.
Relay the voice from operator to caller, and vice versa
Recognize the voice and extract the valuable information by using Wit.ai STT API
Visualize the information on a browser by using React
How we built it
Wit.ai provides most of the functions we wanted to implement. Thanks to Wit.ai, the only NLP work left to us was adding a separate model for recognizing special entities such as emergencies and injuries.
The STT function in Wit.ai handles converting voice data into text, and the built-in NLP can extract and classify entities like place and time.
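For reference, here is a sketch of how an audio clip can be sent to the Wit.ai speech endpoint; the project streams audio from its TypeScript server, so this Python snippet is only an illustration, and the token and file name are assumptions:

```python
import os
import requests

WIT_TOKEN = os.environ["WIT_TOKEN"]  # assumed Wit.ai server access token

def transcribe_wav(path):
    """POST a short WAV clip to Wit.ai and return its raw response."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.wit.ai/speech",
            headers={
                "Authorization": f"Bearer {WIT_TOKEN}",
                "Content-Type": "audio/wav",
            },
            data=f,
        )
    resp.raise_for_status()
    # Wit.ai streams back partial and final results as a sequence of JSON chunks.
    return resp.text

print(transcribe_wav("call_snippet.wav"))
```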
Challenges we ran into
Our biggest challenge was streaming audio between the browser and the server. There were several ways to achieve this, most notably using WebRTC to establish the call, but that was too hard to do quickly: we would have had to set up a STUN/TURN server to use the protocol properly.
So we converted the audio to a series of buffers and streamed them to the server using socket.io. We transmitted each audio buffer to the other side as soon as possible and achieved only about one second of latency during the call.
We decided to stop developing the advanced features for this demo, but we think we could build a proper call structure with the right infrastructure.
What's next for 911 Assistant
If this approach reaches a sufficient level of accuracy, we can introduce it into the queueing system, where a simple scenario would allow us to proactively extract key information from requests waiting to be answered, saving a huge amount of time in managing emergency calls.
Reference
Coronavirus Disease Outbreak in Call Center, South Korea - CDC Paper
Built With
audio
react
socket.io
typescript
webrtc
Try it out
github.com
chadolbagi.github.io |
10,007 | https://devpost.com/software/hear-everthing | Inspiration
I wanted to create an app that helps people who have difficulty connecting with their loved ones online during this pandemic, by letting them control their social media by voice. My goal with this app is to make things easier for people with visual impairments by providing a tool that automates social networks and Google searches.
What it does
The app uses Selenium to control the user's browser, logging into their social media accounts and posting according to the voice command. It can also perform Google searches and read out the information on the web page.
How I built it
I created a wit.ai app with intents to classify the different social networks the app would use.
I then created a Python app with a simple GUI, so the user only has to open the app for it to work.
With Selenium I created functions, one per social network, to automate logging in and posting.
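A minimal sketch of the Selenium side (illustrative only, not the project's actual functions): it simply runs a Google search for a recognized voice query, assuming chromedriver is installed and on the PATH.

```python
# pip install selenium  (requires a matching chromedriver on the PATH)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def google_search(query):
    """Open a browser and run a search for the recognized voice command."""
    driver = webdriver.Chrome()
    driver.get("https://www.google.com")
    box = driver.find_element(By.NAME, "q")
    box.send_keys(query)
    box.send_keys(Keys.RETURN)
    return driver  # the caller can read results from driver.page_source

if __name__ == "__main__":
    google_search("latest news about vaccines")
```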
What's next for HearEverything
Finish the software development of the app
Create more options so the user can enjoy more of each social network
Add a way for the user to play music through the app
Find more ways to build an easier GUI for the user
Built With
gtts
python
selenium
speechrecognition
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/jobproctor | Lets get started
Choose from multiple features
Preferences and skills section
Recommendations tab
Inspiration
Refusing to be ordinary has been the motto of our team at JobProctor since the very beginning. We are a team of high spirited engineers who want to tackle the problem of unemployment in the gig economy for the people from the lower strata of society.
Given the uncertainty in the world and the ever-increasing number of people losing their jobs due to the global pandemic, we decided to come up with a platform that can empower our society in such tough times. Thus, we decided to tackle the giant of unemployment which would further worsen in the near future.
In our opinion, the people who would be the worst hit by this situation would belong to the unorganized informal industry. Considering Facebook and messenger’s penetration through all the sections of our society, we felt it was best to use this platform in order to reach a large number of people who are in search of jobs and do not use traditional job search platforms like LinkedIn, etc.
Our app aims at organizing and managing the informal job industry. Such engagement does more than increase productivity, it decreases attrition, reduces snafus, and rationalizes the cost of operation; all while giving a much safer cultural fit.
JobProctor’s mission is to create a transparent and ethical and efficient job-search platform for all domestic and gig workers and household employers, and provide bespoke platform features to assist and support users throughout the employment term.
What it does
JobProctor is an interactive and easy-to-use chatbot on messenger, where people can search for jobs, create jobs, create personalized alerts for particular openings and also apply for these positions via messenger.
The semi-skilled and unskilled workforce in India is expanding as demand for everyday services increases in urban areas. From delivering food and appliances to helping with home maintenance and carpentry work, the segment is growing exponentially, mostly driven by rapid urbanization. There are several job search platforms available, but all of them are concentrated in the professional and white-collar sectors; this job sector has no leader. All these factors, added to our heartfelt desire to make rural India economically self-sufficient, led us to isolate and select this particular problem.
Today, unskilled and gig workers are looking at savings, location, living conditions, and a community, which are some of the key factors in determining the willingness for them to take up a job. Our solution caters to all these factors and provides a personalized job search considering all such factors. We aim at fostering better job opportunities for workers and domestic help. Our venture will also help promote local businesses and mom-pop stores who are in search of workers. We want to broaden the horizon of opportunities for domestics and unskilled workers.
How we built it
Explained below are the features of our apps and how we built them.
Create a Job Posting:
We allow employers to create job postings instantly. All job postings are saved in our database and also in Google’s Cloud Database to ensure they reach the right audience. All the added jobs go through our reliability model to notify users about sham or fake postings so as to safeguard them. Employers can add more details to make their job reliable.
Show Job Postings:
This allows users to view their job postings and edit or delete them.
Get Alerts:
This unique feature helps users keep track of all the positions they are interested in and get daily updates on them. You can just type 'Alert' to set one or delete an existing one.
Google’s Recommendation Engine:
We have used Google’s Job Search v3 API to make sure users get the right recommendations when they add their skills/preferences. Google’s API indexes the added jobs and recommends them in order of highest relevance with respect to all preferences.
Auto Complete Feature:
We also allow users to paste a job description into our Messenger interface. The bot, leveraging Wit.ai's technology, is smart enough to identify key parameters like Job Title, Salary Range, and Work Experience, making for an effortless experience for employers.
Automatic detection of possible fraudulent jobs:
This feature shows the degree of legitimacy of a posted job and bolsters an individual's decision about whether to apply for it. To achieve this, we integrated our app with a machine learning model that predicts the probability that a job posting is legitimate.
The technology arsenal used to build this feature consists of Python (with libraries like scikit-learn, XGBoost, pandas, and hyperopt), Flask, and Heroku. Python is used solely to build the ML model, while Flask runs a server that listens for HTTP requests and Heroku hosts it. Diving into further details:
Dataset Description:
The dataset used is an annotated public dataset with 17,880 job postings, of which about 900 are fraudulent. Each record is represented as a set of structured and unstructured fields, with a label indicating whether the posting is fraudulent. The dataset is highly unbalanced, which we dealt with by oversampling the minority label.
Data visualization and feature engineering:
In order to understand and better model the task at hand, we analyzed the data through visualization and built a proper understanding of the same. The categorical features like employment type, department, and experience needed were embedded using CatBoost categorical encoder. The job description associated with a job was cleaned and a 100-dimensional vector embedding was created using Doc2Vec.
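A condensed sketch of that feature pipeline (illustrative; the file and column names are assumptions, not necessarily those of the actual dataset or code):

```python
import pandas as pd
from category_encoders import CatBoostEncoder
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

df = pd.read_csv("fake_job_postings.csv")  # assumed file name for the public dataset

# Encode categorical columns with the CatBoost encoder, fit against the target label.
cat_cols = ["employment_type", "department", "required_experience"]
encoder = CatBoostEncoder(cols=cat_cols)
X_cat = encoder.fit_transform(df[cat_cols], df["fraudulent"])

# Embed the cleaned job descriptions into 100-dimensional Doc2Vec vectors.
docs = [TaggedDocument(words=str(text).lower().split(), tags=[i])
        for i, text in enumerate(df["description"].fillna(""))]
d2v = Doc2Vec(vector_size=100, min_count=2, epochs=20)
d2v.build_vocab(docs)
d2v.train(docs, total_examples=d2v.corpus_count, epochs=d2v.epochs)
X_text = [d2v.infer_vector(doc.words) for doc in docs]
```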
Model training:
We trained several models to see which algorithms were promising for the given data. Finally, we settled on the two top-performing classification algorithms, XGBoost and Random Forest, and ensembled them to create our final model.
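One common way to build such an ensemble is scikit-learn's soft-voting classifier; the sketch below is illustrative, with assumed hyperparameters rather than the team's tuned values:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from xgboost import XGBClassifier

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)),
        ("rf", RandomForestClassifier(n_estimators=300, class_weight="balanced")),
    ],
    voting="soft",  # average the predicted probabilities from both models
)
# ensemble.fit(X_train, y_train)
# fraud_probability = ensemble.predict_proba(X_test)[:, 1]
```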
Optimisation:
The hyperparameters of each algorithm were optimized using Bayesian Optimization; we kept the set of hyperparameters that yielded the best result during validation.
Hosting:
To integrate this functionality into our Node.js application, the model was hosted with Flask locally and then publicly on Heroku. For every job posted, the application fires an API request to the hosted model, which answers with the legitimacy prediction that is then displayed in our application.
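The hosted model can be as small as a single Flask route that the Node.js app calls; this is a sketch in which the route name, pickle file, and payload shape are all assumptions:

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("fraud_model.pkl", "rb") as f:  # assumed: the trained ensemble, pickled
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    """Return the probability that a posted job is fraudulent."""
    features = request.json["features"]            # pre-encoded feature vector
    proba = model.predict_proba([features])[0][1]  # probability of the 'fraudulent' class
    return jsonify({"fraud_probability": round(float(proba), 3)})

if __name__ == "__main__":
    app.run()
```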
Challenges we ran into
We had a holistic experience full of ups and downs that further broadened our approach towards tackling problems, both in tech and socially.
Learning and creating an app in Node js and getting familiarized with Facebook’s Messenger Platform was the key part of our journey. Integration with Google’s Job Search v3 API was one of the cardinal challenges since the API had little documentation and sources to refer to. The next part of our journey was identifying how we can make our system reliable and it was at this juncture that we thought of having a reliability system in place.
While building the legitimate-job identification model, the public dataset was severely imbalanced, which we dealt with by oversampling the minority class. This way the model was better able to generalize on the features that allow a job to be flagged as fraudulent. Another challenge was integrating the model, built in Python, with the application, which was in JavaScript.
The workaround to this was to create an API that links both. The main app calls for the prediction of the model with the details of jobs, the hosted model receives the API call, predicts the legitimacy of the job, and sends the prediction which is then shown in the application.
The asynchronous nature of Javascript made things difficult while we tried communicating with different components of our application which are interdependent on each other for data. Designing user interactions and experience was also another challenge. Choosing from the available plethora of UI frameworks that offers most of the required components and also looks modern was also a part of the design process. We kept reiterating the design process as the app progressed to come up with a more intuitive user experience.
Other challenges of implementing JobProctor include: how to encourage initial usage and build a 'trust' community with users on the platform; how to build on initial momentum towards strong user retention; creating conditions for social awareness among employers in host countries; and increasing platform accessibility for those within the identified demographic who are digitally handicapped and/or in hard-to-reach areas.
Accomplishments that we're proud of
We have classified our accomplishments into two baskets, a technology bucket, and a social impact bucket.
To begin with the former, integrating the Google Job Search API was a blocker, since without it we would not have been able to provide our users with the much-needed personalized suggestions. After following the documentation thoroughly, the team was finally able to get past it, and we were happy we could bring this to our users.
We wanted to reduce the number of online recruitment frauds, especially employment scams, which may lead to privacy loss for applicants and in turn, harm the reputation of various organizations involved. Our application provides a way to solve this problem by using machine learning. This way the app can reinforce the trust that we form with the aspiring applicant’s community.
Applying the idea of organization and management to the informal job industry in India is an unprecedented task. Innovation shines through JobProctor’s easy-to-use mechanism, which is designed to engage user segments by giving personalized and timely alerts and updates. Inbuilt platform features aim to continue supporting employers as well as employees throughout their job-hunting process.
Team JobProctor is proud of the fact that we could successfully use Facebook's penetration to reach out to such an often neglected section of our society and thus create a positive impact in their lives by exposing them to infinite opportunities of progressing their careers.
What we learned
The main takeaway for our team was to appreciate how tech-dominated if implemented in a simple yet elegant way can serve a larger purpose for the greater good of our society.
The satisfaction with the fact that JobProctor will positively impact the growing number of increasing informal workforce in India along with the expanding migrant populations is yet another takeaway.
JobProctor’s backbone lies within SDG 10: Reduced Inequalities, and Goal 10.7 — “to facilitate orderly, safe, regular and responsible migration and mobility of people […] through the implementation of planned and well-managed migration policies,” alongside Indicator 10.7.1 to measure impact (“Recruitment cost borne by employee as a proportion of yearly income earned in country of destination”).
What's next for JobProctor
We plan to greatly increase the number of job postings on our platform by 2021. We also aim to provide support in regional languages and look forward to implementing voice-based conversations, keeping our target audience in mind. We want to add bio-finder functionality to our application. We also have global aspirations for the platform and are aiming to provide a meaningful livelihood to 120 Cr domestic workers and blue-collar individuals.
Built With
angular.js
category-encoders
css
flask
gensim
google-job-search-v3-api
heroku
html
javascript
matplotlib
nltk
numpy
pandas
postgresql
python
seaborn
sklearn
uikit
xg-boost
Try it out
m.me |
10,007 | https://devpost.com/software/x-1oqc6m | Infrastructure diagram
Contacts section
Calibration of the user's speech when sober
Notification when Lucid detects a high intoxication level
Inspiration
The United Nations health agency reported that alcohol causes more than one in 20 deaths globally each year, including those resulting from drink-driving, alcohol-induced violence and abuse, and a multitude of diseases and disorders.
Yet another study conducted in the US found that 11% of women have experienced alcohol or drug-facilitated sexual assault at some point in their lives, and 5.5% of men were made to penetrate someone else through alcohol/drug facilitation.
We imagined a world where there was much more lucidity, and a clearer awareness about one’s own degree of intoxication to avoid making dangerous decisions. Why were we relying on legacy devices like breathalyzers that were expensive and impractical, when we could bring this knowledge into the hands of the masses?
We believed that if the individual could signal for help once they were past a certain threshold of intoxication, they would be able to get themselves out of potentially precarious scenarios.
What it does
We created an intuitive app that enables users to self-detect their level of intoxication and automatically reaches out to saved contacts once the user appears to be overly intoxicated.
How we built it
Our app detects intoxication on two fronts, leveraging both mental and verbal cues in order to come to a conclusion on the user’s intoxication level.
Since intoxication impairs cognitive functions (Fillmore, 2007), the basis of checking for intoxication is a series of time-sensitive, simple logic puzzles. The app asks the user a series of questions; a sample question would be: “If you have to wake up for a meeting at 8am tomorrow, what should you do?”. The user is expected to respond with possible commands, which wit.ai can then process. If the user returns an incoherent or illogical answer, the audio file of the user’s response is sent, together with a notification, to the user’s saved contacts.
This audio file will also be processed by our RNN ML model, which will consider the following speech properties that are affected by intoxication (Marge, 2011):
Clarity of pronunciation: Intoxicated users tend to have poorer speech clarity
Pace of speech: Intoxicated users tend to speak slower
Pitch accents: Intoxicated users have higher/lower emphasis frequencies as compared to when they were sober
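As a rough sketch of the kind of recurrent model this describes (an assumption about its shape, not the team's actual architecture), here is a minimal LSTM classifier over per-frame speech features such as pitch and energy, with labels 0 = sober and 1 = intoxicated; the input shapes are hypothetical:

```python
# Assumes inputs of shape (timesteps, n_features): per-frame speech features
# such as pitch, energy, and speaking rate, labeled 0 = sober, 1 = intoxicated.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def build_model(timesteps=200, n_features=13):
    model = Sequential([
        LSTM(64, input_shape=(timesteps, n_features)),
        Dropout(0.3),
        Dense(32, activation="relu"),
        Dense(1, activation="sigmoid"),  # probability of intoxication
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```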
Challenges we ran into
We were limited by the fact that we did not possess a comprehensive dataset of audio files at varying degrees of intoxication, which would be necessary to build the speech recognition capabilities of our app. To mitigate this limitation, our app also actively collects data. This happens when we send the audio file of the user's response to their saved contacts, and the contact responds to this audio file by identifying it as “Sounds OK” (not intoxicated) or “I'm on my way” (intoxicated). Clicking either of these buttons serves as a human form of verification, generating data that is cleanly labeled as either intoxicated or not. These datasets can then be used to further refine our RNN model, improving its accuracy in detecting intoxication from audio files.
Accomplishments that we're proud of
Building a working prototype!
What we learned
Data considerations - we wanted to make sure that users' privacy was not compromised, so we also thought about anonymizing our data collection.
What's next for Lucid.ai
Refining our Machine Learning model and ensuring higher degrees of accuracy for our product.
Built With
adobe-xd
flask
python
react-native
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/companion-9u5kxc | Inspiration
Usually I don't like chatbots, because I get the feeling I am chatting with a robot that doesn't understand me the way a human would. So, using Wit.ai, we needed to provide human-level conversation.
The end user needs to feel like they are talking with a human, so we made every response free-typed instead of giving users preset options (which would give a robotic chatbot experience).
Main theme: everyone's problems need to be addressed.
Companion chatbot: after the conversation, it offers a positive outlook, which is scarce in society.
What it does
Step 1: The Companion chatbot listens to your problem.
Step 2: It provides multiple ways to solve it; solutions come in various forms (books, videos, websites).
How I built it
Using the Facebook Messenger platform and Wit.ai.
Once the user interacts through Facebook Messenger, Wit.ai utterances and the corresponding responses are handled in Node.js.
Challenges I ran into
Choosing which problems to address - we currently chose Work and Family after discussion with the team.
Making responses that keep the person engaged and feel like talking with a human.
Accomplishments that I'm proud of
Learnt Node.js and a UI framework,
Discovered the Messenger and Wit.ai features,
Connected multiple applications so they work as a single application.
What I learned
Building a working chatbot from scratch was a great learning experience.
What's next for Companion
We plan to add more problem areas and make the interaction feel even more human.
Built With
facebook-messenger
node.js
wit.ai
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/haylingo-your-new-language-practice-companion-amcxfb | Inspiration
My eagerness to learn and become fluent in English motivated me to make something that helps people like me practice a target language easily.
What it does
Main Feature
HayBot
a fast way to practice your new language with a conversational AI bot.
HayFriend
connects you with real people across the world who are also enthusiastic about practicing a new language.
HayWord
Play a guess-the-word game to enrich your vocabulary in a fun way.
Support Feature
Translate
you can translate single words or whole sentences, powered by wit.ai's language identifier, NLP and NLU.
Pronunciation
listen to how the word is pronounced.
Quick Reply Feature in Conversation
Translate All
make it easier for you to understand the last chat text.
Change to Speech
a mode made to get you accustomed to listening to the new language.
How I built it
Using FB Messenger with Node.js as the backend and TypeScript as the language. User input is processed with wit.ai to determine the intent (translate or pronounce) and to identify the input language, which is stored as the user's parent language in MongoDB. For the translation service and text-to-speech I used AWS (Translate and Polly), and WordAPI for the HayWord feature.
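For illustration, here is a minimal Node.js sketch of the translate/pronounce path under the assumptions above (it is not the app's actual code; the intent names, AWS region and Polly voice are placeholders):

```javascript
// Minimal sketch: wit.ai decides the intent, AWS Translate handles translation,
// AWS Polly produces pronunciation audio. Intent names and settings are placeholders.
const fetch = require('node-fetch');
const AWS = require('aws-sdk');

const translate = new AWS.Translate({ region: 'us-east-1' });
const polly = new AWS.Polly({ region: 'us-east-1' });

async function handleText(text) {
  const wit = await fetch(
    `https://api.wit.ai/message?v=20200601&q=${encodeURIComponent(text)}`,
    { headers: { Authorization: `Bearer ${process.env.WIT_TOKEN}` } }
  ).then(r => r.json());

  const intent = wit.intents && wit.intents[0] ? wit.intents[0].name : 'translate';

  if (intent === 'translate') {
    // 'auto' lets AWS detect the source language
    const out = await translate.translateText({
      Text: text, SourceLanguageCode: 'auto', TargetLanguageCode: 'en',
    }).promise();
    return { type: 'text', body: out.TranslatedText };
  }

  if (intent === 'pronounce') {
    const audio = await polly.synthesizeSpeech({
      Text: text, OutputFormat: 'mp3', VoiceId: 'Joanna',
    }).promise();
    return { type: 'audio', body: audio.AudioStream }; // Buffer to attach in Messenger
  }

  return { type: 'text', body: 'Sorry, I did not get that.' };
}

module.exports = { handleText };
```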
Challenges I ran into
sending an audio file from AWS Polly to the Messenger bot,
bridging users,
creating a guess-the-word game in Messenger.
Accomplishments that I'm proud of
used wit.ai for the first time and integrated it with Messenger,
bridged users,
sent text-to-speech audio to Messenger,
made a guess-the-word game to enrich the user's vocabulary.
What I learned
a lot about wit.ai and the Messenger API,
and to be patient when getting stuck or hitting errors while coding.
What's next for HayLingo! - Your New Language Practice Companion
iterate toward product-market fit
expand to other new languages such as Korean, Japanese, Spanish, French, etc.
Built With
aws-translate
cleverscript
facebook-messenger
mongodb
node.js
polly
s3
typescript
wit.ai
wordapi |
10,007 | https://devpost.com/software/botfind | Bot_find
Inspiration
To ease the stress programmers face when looking for answers to a bug.
What it does
It scans through documentation files to give answers to a bug, as well as web pages where an answer has already been provided.
How I built it
Built with wit.ai
Challenges I ran into
Linking the bot to the documentation files and web pages where answers can be found.
Accomplishments that I'm proud of
I gained some insight into wit.ai.
What I learned
How apps can be created with wit.ai
What's next for Bot_find
Linking documentation files to it and using it with a web page and an app where programmers can get answers to bugs and insight into a concept.
Built With
wit.ai |
10,007 | https://devpost.com/software/facebook-messenger-chatbot-boilerplate | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Facebook Messenger Chatbot Boilerplate
Built With
mongodb
redis
wit.ai |
10,007 | https://devpost.com/software/robodoc-p3lb8h | RoboDoc
breast mammograms
Chest x-ray
working demo
COVID19 symptoms and analysis
Inspiration
We came up with this idea because, with the lockdown caused by the coronavirus pandemic, people are confined to their homes and visiting hospitals is a bit risky, as there is a chance of getting infected. We are also facing a situation where anyone suffering from any disease is assumed to be suffering from
COVID19
. Therefore we wanted to help people reduce this state of panic and thought of building a bot that answers people's questions regarding symptoms. That way, people gain knowledge about their condition.
What it does
RoboDoc
is a Messenger bot that people can chat with to get their symptoms analyzed through messages. Based on the symptoms sent by the user, it analyzes them and diagnoses the most likely diseases. At present we have added 21 common diseases and will be expanding to 87; our main objective was to lessen the panic caused by the coronavirus pandemic. We have added analysis of frontal
chest x-ray for covid19
and analysis of
mammography for breast cancer detection
.
Dataset
For the COVID19 detection model using X-rays, we used Kaggle and GitHub datasets accounting for a total of 1,300 COVID19 and 1,200 normal chest x-rays. For breast cancer, we used the Kaggle dataset. For symptoms and diseases, we used a CSV file for NLP training.
How we built it
We are using wit.ai for natural language processing: based on the symptoms mentioned by users, we predict the disease. For detection of
covid19
from chest x-rays, and of breast cancer from mammography, we are using TensorFlow.js models and JavaScript.
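As a rough sketch of that detection step, the snippet below shows how a converted TensorFlow.js model could be loaded and run on an incoming x-ray image; the model path, input size and class labels are placeholders, not the team's actual values.

```javascript
// Minimal sketch of x-ray classification with a converted TensorFlow.js model.
// Model path, input size and labels are placeholders.
const tf = require('@tensorflow/tfjs-node');

const LABELS = ['covid19', 'normal'];
let model;

async function classifyXray(imageBuffer) {
  if (!model) {
    model = await tf.loadLayersModel('file://model/model.json'); // converted from the Python model
  }
  const prediction = tf.tidy(() => {
    const img = tf.node.decodeImage(imageBuffer, 3);              // decode JPEG/PNG bytes
    const input = tf.image.resizeBilinear(img, [224, 224])        // match the model's input size
      .div(255.0)                                                 // normalize pixels to [0, 1]
      .expandDims(0);                                             // add the batch dimension
    return model.predict(input);
  });
  const scores = await prediction.data();
  prediction.dispose();
  const best = scores.indexOf(Math.max(...scores));
  return { label: LABELS[best], confidence: scores[best] };
}

module.exports = { classifyXray };
```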
Challenges we ran into
Training with wit.ai was difficult, and during the event some changes were made to wit.ai. Integrating the TensorFlow.js models with the Messenger webhook, and combining everything into one single project, was challenging.
We are trying to make our bot perfect and will research methods of implementation that could improve the accuracy of our bot. We will experiment with other architectures for training our model to improve efficiency. This can be achieved approximately in a time span of a month. After this is done we may look for funding and make it available to people.
Accomplishments that we're proud of
We are proud that we could build a chatbot that helps people find out whether they might be COVID positive. Having a Messenger bot that detects COVID from lung x-rays and gives you an idea of whether you have it based on symptoms is something to be proud of. Being able to deploy a Python-trained model in JavaScript and host it on a server is another accomplishment, as it was one of our major challenges. Finally, we are proud that our bot works as it should. So basically, we are proud that we overcame all the challenges and built the application.
What we learned
A deeper understanding of
Facebook Messenger
architecture and how wit.ai works.
Training of NLP using wit.ai. Machine learning model creation, conversion to tensorflow.js, and integrating it with messenger
What's next for RoboDoc
At present we have added
21 diseases
we will be expanding it to
87 diseases
for predictions using all symptoms. We are trying to make our bot accurate and as it is used more we will train it for more symptoms and diseases. We will research methods of implementation that could improve the accuracy of our bot. We will experiment with other architectures for training our model to improve efficiency. We will be including more medical models for the diagnosis of more diseases using x-rays and MRIs. This can be achieved approximately in a time span of a month. After this is done we may look for funding and make it available to people.
Built With
glitch
tensorflow.js
wit.ai
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/eldy-bot | Eldy-Bot responds to a nursing home concern.
Inspiration
During our final year of college, for a final project, we spoke to older individuals about their lives during the COVID-19 pandemic. These individuals were often feeling lonely since their loved ones no longer lived with them and since they could no longer participate in volunteering opportunities. Since COVID-19 can be deadly to adults 65+ years old, these individuals also feared going outside for regular everyday tasks such as grocery shopping. We hoped to create a product that would help out one of society's most knowledgeable and selfless populations during this rough period of time.
What it does
Eldy-Bot is designed to aid the most vulnerable populations during the COVID-19 pandemic.
In order to
help older individuals gather necessities during the pandemic
with messages such as "Eldy, I need someone to go grocery shopping for me." or "Eldy, I need water.", Eldy-Bot can provide the user with a list of nearby people who have any of the requested items or that can provide any of the requested services.
In order to
help older individuals to create meaningful connections and therefore fight loneliness
with a message such as "Eldy, I feel lonely.", Eldy-Bot can help the user connect with people based on their intersecting interests/hobbies.
Eldy-Bot also has the ability to
answer COVID-19 related questions that older adults may have
such as: "Eldy, I have a kidney disease, what actions should I take in order to be safe?" or "Eldy, what's the status of COVID-19 at 777 Brockton Avenue, Abington MA 2351?".
How I built it
Wit.ai
to identify the intents and entities of a user sentence.
Airtable Forms
to allow good samaritans to submit information about the products and services they have available as well as what their hobbies and interests if they wish to connect with an older individual.
Airtable API
to retrieve data (stored from form submissions) in order to craft a response to the users request.
Flask
to handle web requests.
Heroku
to deploy our product.
Challenges I ran into
We were initially using Glitch to deploy our product; however, since it recently stopped allowing pinging services, we had to switch our entire code base to Heroku.
Accomplishments that I'm proud of
What I learned
How to create chat bots that can provide accurate responses regardless of the phrasing of a sentence.
How to train Wit.ai to identify specific intents and entities.
How to integrate the messenger service to a Facebook page.
How to use Airtable forms to create a database that stores user information submitted via a form and how to retrieve that information using Airtable APIs.
What's next for Eldy-Bot
We would love to further develop Eldy-Bot identify any organizations that
Built With
airtable
apis
facebook-messenger
flask
heroku
mapquest
nltk
pymessenger
python
rest
wit.ai
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/voluntree2 | Listens to both Feed and Page Inbox
Data collection & account linking in 3rd party tools
Automated answer learning from knowledge base
Convenient "Add to Calendar" button for volunteer
Volunteer can "Share" their activity with their friends and motivate others to sign up
The problem
How do nonprofits reach volunteers? Mostly by creating a new sign-up page or adding a Google Form on the website. But you can reach more potential volunteers where they are already spending their time – on Facebook. Doing so, however, can lead to a lot of manual work, like keeping track of volunteers on spreadsheets. Imagine having to collect emails from individual chats and create an account in a volunteer management system. Or going through the spreadsheet to send important updates to individuals. If the organization doesn't use any management software, things get even worse. Data gets lost eventually and volunteers have to fill up a form every time they come back. There remains no way to recognize returning volunteers and pay tribute to their excellent work. And of course, the manual bookkeeping process doesn't scale, especially if the organization has a large number of followers on social media.
The solution
Voluntree comes into the picture to provide automation to the volunteer recruitment workflow from social media.
Once connected to a Facebook page, VolunTree will listen to the feed and page inbox. It will automatically recognize interest from comments and messages. It will initiate data collection, onboard volunteers, and create accounts in the volunteer management software you already use. It will also be able to answer factoid questions by learning from the knowledge base you provide, so that you don't have to deal with repetitive questions over and over again.
Inspiration
We have closely seen how nonprofits in our local community struggled with the manual bookkeeping process during the challenging time of the COVID-19 pandemic, and it was a great motivation to automate this process with software.
Features
Outreach Tools
Transparent and detailed “Sign Up” posts
Spread the words with multiple pages and posts
See response in real-time and take actions
Sign Up Management
Automated data collection, email verification and onboarding
Account linking with 3rd party integrations
Volunteer profile, activities & ratings
Communications
Automated onboarding and acknowledgment
Automated response from sign-ups, volunteer info and payments
Broadcast event updates in messenger
For Volunteers
Review and respond on the go (post comment + messenger)
Convenient "Add to Calendar" button
"Share" their activity with their friends and motivate others to sign up
Built With
django
react
wit.ai |
10,007 | https://devpost.com/software/umnofon | List of projects for the user
Captured notes
Details of a single note after analysis by Wit.ai
Adding new note with direct input
New note after analysis by Wit.ai
Inspiration
Looking at doctors at a hospital or building inspectors in the field, I see how much time they spend, and how uncomfortable it can be, recording and remembering their observations and taking notes.
Their skills and work are
not for scribbling the note
, they
need to focus their attention
on the patient or the building they work on.
Let the technology help them by taking care of note taking.
Let technology enhance this note taking.
Make it safer, faster and more productive to better people's lives.
What it does
Umnofon is a mobile app companion for field professionals. It uses voice-to-text and NLP models to process, understand and make digital notes available. With the help of NLP and built-in inference logic, the app produces a report document based on the notes submitted by the user.
We selected a civil engineering construction case, where an engineer needs to constantly monitor the project's progress by visiting the building site. Without our app, an engineer will go through the building and either memorize or write down notes, which are then reported to the project supervisor. Such an approach
can lead to checkpoints being missed (too much to check, limited time), errors (incorrectly captured data) etc.
With our app, a civil engineer dictates notes and they are automatically recognized using a Wit.ai-based algorithm. To avoid excessive repetition, an inference algorithm is used to construct complete notes from historical data and incomplete information.
In the MVP app the report lists the resolved notes and their context.
How I built it
The app is split into on-device audio capture and a cloud-based processing. The stack for the app is the following:
App: Expo.io SDK37, React-Native-Paper interface,
Backend: Google Firebase: Authentication, Firestore, Functions, React-Admin
Speech-to-text/NLP: wit.ai
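To illustrate the cloud-processing step, here is a minimal Node.js sketch (not the app's actual code) that sends an uploaded voice note to wit.ai's speech endpoint and maps the returned entities into a structured note; the entity names and response parsing are assumptions.

```javascript
// Minimal sketch: voice note -> wit.ai speech endpoint -> structured note.
// Entity names and the note schema are placeholders.
const fetch = require('node-fetch');

async function processVoiceNote(audioBuffer, projectId) {
  const res = await fetch('https://api.wit.ai/speech?v=20200601', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.WIT_TOKEN}`,
      'Content-Type': 'audio/wav',
    },
    body: audioBuffer,
  });

  // wit.ai streams several JSON objects; take the last one as the final result
  // (the exact chunk format may vary, so parsing may need adjusting).
  const chunks = (await res.text()).trim().split(/\r?\n(?=\{)/);
  const finalResult = JSON.parse(chunks[chunks.length - 1]);

  const entities = finalResult.entities || {};
  return {
    projectId,
    text: finalResult.text,                                   // raw transcription
    location: entities['location:location']?.[0]?.value,      // e.g. "3rd floor" (placeholder entity)
    observation: entities['defect:defect']?.[0]?.value,       // e.g. "crack in wall" (placeholder entity)
    createdAt: Date.now(),
  };
}

module.exports = { processVoiceNote };
```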
Challenges we ran into
wit.ai's approach to NLP is different from my existing experience; the model was rebuilt at least twice,
mapping a real-world situation (in this case, the civil engineering way of working) requires very flexible NLP and complex inference logic
Accomplishments that we are proud of
We built an MVP full-stack application that runs on a mobile device, collects audio and uses Wit.ai for recognition and analysis, in less than 3 days.
What we learned
professional settings like civil engineering can certainly benefit from the new AI/ML based technologies,
technology can make a real and significant impact in traditional fields by reducing time and non-essential effort,
improvements in process and technology have real-life impact: more patients served, safer buildings built, etc.
What's next for Umnofon
There are several areas where Umnofon can develop:
security and privacy (encrypted notes)
clean and polish of the app,
new professional fields with specific terminology and models,
new natural languages support,
team collaboration when several professionals work on the same project,
integration capabilities,
compliance (e.g. HIPAA for medical data, GDPR)
Built With
expo.io
firebase
typescript
wit.ai
Try it out
expo.io |
10,007 | https://devpost.com/software/olive-the-bot | Inspiration
One of our teammates, Gowtham Balachandhiran, suggested this beautiful use case, which can help people dealing with mental stress or depression.
What it does
The bot selectively instructs the user to do challenges such as planting trees and asks them to upload photos as proof.
The user has to wait 24 hours to learn the next challenge (disabled for testing reasons). This naturally creates curiosity about the next set of challenges. According to psychologists, curiosity creates positive vibes and reduces stress. This phenomenon can also help suspend suicidal thoughts.
How we built it
We used two applications developed in wit.ai for the conversation:
1.Yes or No
2.Sentiment Analysis
Then we used MongoDB to store the users' task images and make the final video.
Challenges we ran into
Our bot is working fine with Chrome, Mozilla Firefox and Safari as of now. We have yet to explore other browsers.
Accomplishments that we're proud of
1. We are able to help people who are suffering from stress by engaging them with challenges.
2. We were able to try out new tools like wit.ai.
3. We worked with newly updated technologies and APIs.
What we learned
We learned a lot of new technologies and APIs. We have now become proficient with wit.ai.
What's next for Olive the Bot
We are working on new updates to our bot, like adding many more days of challenges so that it can be considered a more complete aid against depression, and we are discussing adding a feature that evaluates the task images using OCR to give users a more authentic experience.
Built With
heroku
jquery
mongodb
python
wit.ai
Try it out
facebookhackthon.herokuapp.com |
10,007 | https://devpost.com/software/mymate | Inspiration
I come from an underdeveloped country, Nepal, and in my country there are still many places where people die of diarrhea and cholera due to the unavailability of medicines. So one can easily imagine the status of mental health services in countries like this. If a teenage son tries to share his feelings of loneliness and depression with the family, he is asked not to let his mind wander and to focus on his studies instead. When he tries to share his feelings with friends, they leave no stone unturned to make fun of him. He becomes lonelier and his mental health degrades. What if we could, in some manner, make youths aware of the fact that mental health exists and is as important as physical health? What if we could provide them psychiatric consultation within reach of their fingertips, on their phones, so that they can open up anonymously and without the fear of being judged? Even the helpline numbers and other arrangements relating to mental health do not reach the targeted people, as they are not aware of them. So, realizing the reach of social media like Facebook among youngsters, we settled on launching the platform on Facebook.
What it does
Our platform is built in collaboration with psychiatrists, who have provided us a set of questionnaires that they put forward when a patient first comes to see them. Based on the replies to those questions, we suggest whether they should see a psychiatrist or not. Our platform does two things: one, it makes youths aware that mental health exists and gets them to take a trial test; and two, if an illness is suspected, it provides a way to fix an appointment with a psychiatrist near them. We will have a pool of psychiatrists on our team who will provide in-person or live sessions if needed. Also, various government efforts, such as mental health-related programs and toll-free emergency hotline numbers, will be circulated through the platform.
How I built it
The platform is built using Wit.ai for natural language understanding and the Facebook Messenger API to integrate Messenger with the Facebook page. The backend is built using Node.js and hosted on Heroku. We will have a page named myMate on Facebook where users can chat with the messenger bot freely. The page will also be filled with posts that help create mental health awareness.
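As a rough sketch of how such a questionnaire flow could be driven by wit.ai, the snippet below walks a user through yes/no questions and tallies a score; the questions, intent names and threshold are placeholders, not the clinically provided set.

```javascript
// Minimal sketch of a wit.ai-driven yes/no questionnaire.
// Questions, intent names and the scoring threshold are placeholders.
const QUESTIONS = [
  'Over the last two weeks, have you often felt down or hopeless?',
  'Have you had trouble sleeping, or been sleeping too much?',
  'Have you lost interest in activities you usually enjoy?',
];

const sessions = {}; // per-user progress: { index, score }

// `sendText` is whatever the bot framework (e.g. Bottender) provides to reply with.
async function handleAnswer(userId, witResponse, sendText) {
  const s = sessions[userId] || (sessions[userId] = { index: 0, score: 0 });

  if (s.index > 0) {
    const intent = witResponse.intents && witResponse.intents[0] && witResponse.intents[0].name;
    if (intent === 'yes') s.score += 1; // "yes"/"no" intents trained in wit.ai (placeholder names)
  }

  if (s.index < QUESTIONS.length) {
    await sendText(QUESTIONS[s.index]); // ask the next question
    s.index += 1;
    return;
  }

  delete sessions[userId];
  if (s.score >= 2) {
    await sendText('Based on your answers, it may help to talk to a psychiatrist. Would you like me to book an appointment near you?');
  } else {
    await sendText('Thanks for sharing. Remember, mental health matters as much as physical health.');
  }
}

module.exports = { handleAnswer };
```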
Challenges I ran into
As I came to know about this hackathon late, it was quite a challenge to complete the task in such a short time. I had never heard of wit before, but it was easy enough to get a grasp of, particularly at the beginner level. Due to the poor error handling applied in the platform, we ran into a lot of trouble debugging. Also, a native implementation of the messenger API was quite hard for me, so I chose Bottender, which had useful and to-the-point documentation.
Accomplishments that I'm proud of
Firstly, I feel proud that I attempted to develop a solution that can positively impact millions of lives. In the course of this, I made contact with various MD psychiatrists from my country and asked for their help. I also came to know about the awesome wit.ai tool for natural language processing. I feel very proud to be competing with so many participants from across the world.
What I learned
I learnt how to use the Facebook Messenger API and the various things that can be achieved with it. I also learnt about wit.ai, an awesome tool that makes understanding natural language easy. I implemented a messenger bot for the first time.
What's next for myMate
Many things could be improved in myMate; I have only completed one flow for it in this hackathon. Many advanced features of the Messenger platform, like personas and push notifications, can be used to achieve various tasks, such as introducing a psychiatrist into the chat or notifying users about the confirmation of their appointment date. We can also train wit.ai to chat better with the patient, to give them more of an in-person psychiatrist feel rather than just throwing yes/no questionnaires at them.
Built With
bottender.js
facebook-messenger
heroku
javascript
node.js
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/covid-breaker | Messenger app on Android
Inspiration
In recent months, the pandemic forced governments all around the world to take drastic measures in an attempt to contain the outbreak. This resulted in disruptions to the lives of many people. In online groups, I often find people asking similar questions about new laws, new developments, statistics and contact information. While such information is never more than a search away, it seems far more natural to ask a naturally structured question and for a human to reply with a link or abbreviated information. I created this chatbot as an attempt to provide a natural way of keeping oneself updated with the evolving situation.
What it does
The bot answers questions presented by the user with either structured information, like statistics and analytics, or links to reputable sources.
How I built it
I built the chatbot with Facebook Messenger as the chat interface and Wit.AI to discern the intent of the user. Information is then pulled from public information APIs and presented to users in the form of a natural sentence.
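As an illustration of that flow, here is a minimal Node.js sketch (not the bot's actual code) where wit.ai identifies the intent and a location, and statistics are fetched from a data API; the intent name and the stats endpoint are placeholders.

```javascript
// Minimal sketch: question -> wit.ai intent/entities -> stats API -> natural sentence.
// The intent name and the stats URL are placeholders.
const fetch = require('node-fetch');

async function answer(question) {
  const wit = await fetch(
    `https://api.wit.ai/message?v=20200601&q=${encodeURIComponent(question)}`,
    { headers: { Authorization: `Bearer ${process.env.WIT_TOKEN}` } }
  ).then(r => r.json());

  const intent = wit.intents && wit.intents[0] ? wit.intents[0].name : null;
  const locations = (wit.entities || {})['wit$location:location'] || [];
  const country = locations[0] ? locations[0].body : 'Singapore';

  if (intent === 'get_stats') { // placeholder intent name
    // Placeholder endpoint standing in for the real data source
    const stats = await fetch(
      `https://example.com/api/stats?country=${encodeURIComponent(country)}`
    ).then(r => r.json());
    return `${country} has ${stats.totalConfirmed} confirmed cases and ${stats.totalRecovered} recoveries so far.`;
  }

  return 'Here is a reputable source you can check: https://www.who.int/';
}

module.exports = { answer };
```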
Challenges I ran into
Sometimes, there are just so many ways to say the same thing! Even with the help of Wit.AI, it's not always easy to get the intent of a sentence correct.
Accomplishments that I'm proud of
I've learnt and built things with NodeJS, which was completely new to me before the start of this hackathon.
What I learned
While I still can't say that I'm an expert with Javascript and modern web technologies, I can now build functional web apps.
What's next for COVID InfoBot
Currently, this bot relies heavily on a single API, coronatracker.com, for the majority of its information, and is mostly limited to single questions with no follow-up. Handling follow-up messages to refine the answers is the next obvious step. I'll continue to add more information sources and also provide answers to more complex queries.
Built With
coronatracker
facebook-messenger
wit.ai
Try it out
m.me
www.facebook.com |
10,007 | https://devpost.com/software/jam-destroyer | Traffic snull up on one of Nairobi roads a common vexing occurrence
Traffic Jam Destroyer averter screen shot
Phone front camera scans driver face to check for drowsiness
Inspiration
1.ACCIDENTS CAUSED BY FALLING ASLEEP WHILE DRIVING
Kenya loses more than 3,000 people to road accidents every year, mostly due to drivers falling asleep at the wheel. This is something many drivers experience, especially now that folks need to use their cars for long-distance commutes between towns and cities due to the lack of flights. Sleep is a major factor in most of these accidents; drowsy driving kills.
2.TRAFFIC CONGESTION IN NAIROBI CITY
Traffic congestion is a constant pain for drivers and passengers; Nairobi has been ranked the second-worst city in the world for traffic congestion. This results in delays, low productivity at the workplace, fuel waste, pollution and road rage. It has been made worse by the fear of getting infected with COVID-19 on public transport: everyone is now using private cars on a daily basis.
What it does
1.STOPPING ACCIDENTS CAUSED BY SLEEP DRIVING
With the phone mounted on the vehicle's dashboard facing the driver, the app uses the front camera to constantly scan the driver's face for drowsiness; once signs are detected, the app warns the driver. To keep the driver active, the app initiates a conversation, such as telling the driver to recite the alphabet in descending order to keep the brain stimulated. Also, to avoid holding the phone while driving, drivers can now use speech to do the following:
•Read text messages (SMS) and reply to them through the text-to-speech plugin in the app (we hope to include social media)
•Ask for directions and possible resting places during long trips
•Initiate calls and respond to incoming calls through voice commands
2.ELIMINATION OF TRAFFIC SNARL-UPS ON NAIROBI CITY ROADS
The key concept of this solution is to momentarily remove excess vehicles from the road during peak hours so as to enable a high flow rate of vehicles passing through the city center. This shall be done by encouraging persons with vehicles whose registration numbers start with a certain number to remain at home or at their workplaces for a 30-minute duration while the other half of the vehicles use the road. A mobile phone application will be used to determine, in an orderly manner, which vehicles shall be on hold while indicating which ones should be on the road at that same time.
How I built it
I used the wit.ai HTTP API to send text to the wit.ai server for entity extraction, which enabled me to use JavaScript and HTML5 directly and avoid routing my requests through a server; this reduced the latency and response time.
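A minimal browser-side sketch of that call might look like the following (the token and intent names are placeholders, not the app's actual values):

```javascript
// Minimal sketch: send the driver's command text straight to wit.ai from the browser,
// with no intermediate server. Token and intent names are placeholders.
const WIT_TOKEN = 'CLIENT_ACCESS_TOKEN'; // client token from the wit.ai app settings

async function understand(text) {
  const res = await fetch(
    'https://api.wit.ai/message?v=20200601&q=' + encodeURIComponent(text),
    { headers: { Authorization: 'Bearer ' + WIT_TOKEN } }
  );
  const data = await res.json();
  return {
    intent: data.intents && data.intents[0] ? data.intents[0].name : null, // e.g. "read_sms"
    entities: data.entities || {},
  };
}

// Example: route the recognised command to an in-app action
understand('read my latest message').then(({ intent }) => {
  if (intent === 'read_sms') {
    // trigger the in-app text-to-speech reader here
  }
});
```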
Challenges I ran into
Wit.ai does not have text-to-speech and speech-to-text, which the Jam Destroyer app needs because of its use case, i.e. being used by drivers while on the road.
Accomplishments that I'm proud of
Coming up with a solution that helps my fellow citizens solve problems they face every day, problems that cost them lots of money in fuel wasted on the road. This system will have great benefits for Nairobi city if put to use; the solution will lead to social, economic and environmental benefits.
What I learned
How easy and efficient it is to integrate artificial intelligence that handles natural language into a system using wit.ai
What's next for Jam Destroyer
Scaling the solution to reach other major towns within the region is one of our major goals
Built With
html5
javascript
opencv
wit.ai
Try it out
pronto-legal.co.ke
play.google.com |
10,007 | https://devpost.com/software/how-was-your-lunch | Everyone has a logo nowadays
Inspiration
A healthy diet plays the role of one of the most important factors of human health. As proven multiple times by researchers and nutritionists, one of the key ingredients here is to have a sustainable diet. This is all about developing a habit that you can follow every day, through your normal life or lockdown situations as we have now.
The market is full of various assorted apps that aim to help with controlling food consumption and tracking. Users often spend time building trust with them and trying to create a habit, often finding the process impractical or way too artificial.
How Was Your Lunch is designed to help with nutrition tracking in the most natural way -- through your normal communication. Every day we share millions of food photos on Instagram and tell our friends how yummy was that avocado toast from your local cafe that you had for breakfast. And it feels quite natural, doesn't it?
So instead of installing yet-another-diet-tracking-app, why don't we chat about your daily meals to just another contact in your Facebook Messenger?
How Was your lunch will recognize your language and carefully record this for you, thanks to Wit.Ai Natural Language Processing (NLP) platform. Moreover, it will try to estimate the nutrition facts for the dish that you named (as accurate as it can!), so you can check your daily stats. Of course, the text messages aren't just enough, so you can also share images that How Was Your Lunch will try to recognize and find the best match (again, with nutrition facts). To add to your diary, it will ask you for the mealtime, with Messenger's Quick Replies. With new features to come, you could also leave a voice message if you're in a hurry or just don't feel like typing text.
To keep an impression of talking with the real person, the app has been made to reply and answer to your text with the human language as well. No doubt, sticking to the diet is often hard and any good motivation is important. This is why How Was Your Lunch learns to be a good friend as well and show empathy & support to you.
In future releases, it will carefully guide you as you move on during the day with your meals and help to stay on track, whether your goal is to reduce daily calories, eat more vegs & fibre or stay on a high protein or keto diet.
Technology
How Was Your Lunch is completely integrated into Facebook Pages & Messenger, so no installation needed, just subscribe to the How Was Your Lunch page and start chatting!
The complex but amusing technology layer of NLP and imaging AI is kept completely transparent to a user, exposing only a natural language communication.
Messenger API
Facebook Messenger creates the foundation for the How Was Your Lunch app, connecting it with the user that came to the Facebook page of How Was Your Lunch. Apart from text messages processing and a seamlessly integrated NLP Layer (see below), How Was Your Lunch uses template messages with button replies. It also relies on Quick Replies functionality to instantly ask for the mealtime, when user wants to add a dish from the picture.
NLP layer
How Was Your Lunch relies on the Wit.Ai platform to understand the user's intent, be it saving a meal or asking for a summary. It is aware of several common dishes and can grow to learn more.
With Wit.Ai, the app extracts mealtimes (automatic assignment based on the current time -- to come) and creates a structured summary, if asked for today, yesterday or for the entire week.
Image recognition layer
The app integrates with an outstanding image processing service that performs recognition. So when the user shares a photo, it gets forwarded for ML recognition which in turn results in the details about the dish, including calories & nutrition facts. For demo purposes, it temporarily uses
caloriemama.ai sample API
which has certain limitations but can be replaced with the real one.
Server side
Under the hood, this is a NodeJS application that uses ExpressJS to handle requests and stores data in MongoDB. It gets invoked by the FB Messenger platform after applying the Wit.Ai NLP layer, and replies back after processing commands or images. The app has been deployed to Heroku, but can also be used with Ngrok to serve from the local machine.
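As a rough sketch of that server side, the snippet below shows an Express app storing a parsed meal in MongoDB and returning a simple daily summary; the collection and field names are placeholders, not the app's actual schema.

```javascript
// Minimal sketch: store meals parsed upstream (via wit.ai) and compute a daily summary.
// Database, collection and field names are placeholders.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

const client = new MongoClient(process.env.MONGODB_URI);

app.post('/meals', async (req, res) => {
  const { userId, dish, mealtime, calories } = req.body; // already extracted via wit.ai upstream
  await client.db('hwyl').collection('meals').insertOne({
    userId, dish, mealtime, calories: calories || null, eatenAt: new Date(),
  });
  res.sendStatus(201);
});

app.get('/summary/:userId', async (req, res) => {
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000); // simplified "today" window
  const meals = await client.db('hwyl').collection('meals')
    .find({ userId: req.params.userId, eatenAt: { $gte: since } })
    .toArray();
  res.json({
    meals,
    totalCalories: meals.reduce((sum, m) => sum + (m.calories || 0), 0),
  });
});

client.connect().then(() => app.listen(3000));
```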
Try out
The technology is in the early prototype stage, so access is limited. If you have already got access, you can follow the steps below:
Visit the page and start messaging.
For text commands, use something along these lines: "I made boiled eggs for breakfast" or "Got a bag of crisps for a snack".
For images, please use examples from here:
http://caloriemama.ai/api
To show a summary, you can ask like this: "What's the summary for today?"
Feel free to contact if any questions.
What's next
This all creates a wonderful ensemble of a friendly assistant that would help you to stay on the track with any of your nutrition plans. Chat with it or share photos like with other friends, and How Was Your Lunch will do the rest.
As briefly mentioned above, there's plenty of more ideas that can be incorporated in the application. To name a few possible directions here:
Personal dietary goals: 5 vegs a day, staying low calories or keto etc.
Small talks and chatter: learn a few tricks to amuse users and keep engaged
Broader nutrition facts: not just calories, enhance with proteins/carbs/fats
Extended food base: learn more basic foods and various recipes
Robust image AI: switch to the practical ML model that'd cope with real-life images
Built With
ai
express.js
facebook
facebook-messenger
javascript
mongodb
natural-language-processing
node.js
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/chatbot-mz8pxo | Inspiration
Chatbots are becoming more common nowadays, but they are difficult for small business owners to create. Easing the creation process will spread the use of chatbots among business owners.
What it does
Shop owners create a product with corresponding details such as shipping, material and brand. When a user asks about it in Messenger, the corresponding information is sent in reply.
After creating an ad post on Facebook, the owner can also create a product reference (e.g. "black pants" for the product named "a very large black pants") so that when a user comments on the post with the reference (e.g. "how much is the black pants"), the correct information is sent in reply.
How I built it
When a webhook is called from Facebook, the content is passed to wit.ai and the intent is retrieved. The item name is identified and the detail is retrieved. Then, based on the type of question (what, how much, how long) the appropriate reply is sent back.
For comment handling, because there is a context of a post, when the user asks about a generic item (one that does not have a full item name), the generic name is compared against a list of user-defined references and the corresponding info is retrieved.
When an item is found, it also sets a context variable so that the user can refer to it in follow-up questions.
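To illustrate the lookup-with-context logic described above, here is a minimal sketch (not the project's Sails.js code); the intent names, entity keys and lookup helpers are placeholders.

```javascript
// Minimal sketch: wit.ai intent/entity -> product lookup -> reply, with per-user context.
// Intent names, entity keys and the lookup helpers are placeholders.
const fetch = require('node-fetch');

const context = {}; // last resolved product per sender id

async function handleQuestion(senderId, text, findProduct, findReference) {
  const wit = await fetch(
    `https://api.wit.ai/message?v=20200601&q=${encodeURIComponent(text)}`,
    { headers: { Authorization: `Bearer ${process.env.WIT_TOKEN}` } }
  ).then(r => r.json());

  const intent = wit.intents && wit.intents[0] ? wit.intents[0].name : null; // e.g. "ask_price"
  const itemEntity = (wit.entities || {})['item:item'];
  const itemName = itemEntity && itemEntity[0] ? itemEntity[0].body : null;  // e.g. "black pants"

  // Resolve the product: full name, owner-defined reference, or the previous context
  let product = itemName && (await findProduct(itemName) || await findReference(itemName));
  if (!product && context[senderId]) product = context[senderId];
  if (!product) return 'Sorry, which product do you mean?';
  context[senderId] = product; // remember it for follow-up questions

  if (intent === 'ask_price') return `${product.name} costs ${product.price}.`;
  if (intent === 'ask_shipping') return `Shipping for ${product.name} takes ${product.shippingDays} days.`;
  return `${product.name}: ${product.description}`;
}

module.exports = { handleQuestion };
```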
Challenges I ran into
Facebook app reviewing process.
Integrating sails js with reactjs.
Accomplishments that I'm proud of
Everything
What I learned
How to build a chat bot.
Wit ai api.
Facebook graph api.
Sails js
What's next for Chatbot
Generic questions (not related to any product)
Built With
facebook-messenger
react
wit.ai
Try it out
chatbot.mysuperawesomeweb.co.uk
github.com |
10,007 | https://devpost.com/software/rescueme-6w3amx | Inspiration and Introduction
Surge in Elderly Living Alone
According to the World Health Organization (WHO) Global Health and Aging studies, we are facing an unprecedented circumstance: The world will soon have more elderly than the young, and more people at extreme old age than ever before. It is estimated that there will be approximately 1.5 billion elderly people aged 65 years and above by 2050. The ageing of the world population will continue to increase, because of the declining fertility rates as well as the incredible improvement in life expectancy.
With this global trend comes the accompanying fact that there has been a surge in the number of elderly living alone in most countries. This fascinating insight is affecting all types of societies, be they modern or traditional, around the world, with none being spared. About 40 percent of women in European countries above 65 are living by themselves. And if we were to dive into even the more traditional countries such as Japan, large-family living styles are becoming less popular.
Increase in home deaths
Studies conducted by the U.S. Centers for Disease Control and Prevention show that about one in four Americans above 65 years of age falls every year, and an elderly person is treated in an emergency room for a fall every 11 seconds. To top it off, every 19 minutes, an older person passes away due to falling. Many a time, these falls occur in places where they are alone and unable to seek help. These remarkable statistics point towards a single direction: something needs to be done to provide support to these people; there is clearly a gap that needs to be closed.
It is also worth mentioning that due to the recent global pandemic, the number of home deaths have increased, some of which have lay undetected for weeks. For example, authorities in Detroit have responded to more than 150 “dead person observed” calls and city officials in New York are recording more than 200 home deaths per day.
First-hand experience
Statistics and global trends aren’t the only factors highlighting the absolute need for a solution - it is also first hand experience as well. Being a Firefighter myself (shoutout to Station 11), part of the work I do involves Rescue Locked Door incidents. These rescues could be triggered by a distress alarm, or a concerned neighbour. Many a time, we arrive at the homes of people days after their deaths only to find that they could have been saved if only aid was provided.
With this being a global concern, there is a need to find a way to address the surge of home deaths and put into place preventive measures in a scalable and sustainable manner. Currently, there is no solution that is cost efficient enough to fit all elderly homes, and that is also at the same time intelligent and proactive. There is an obvious gap and an urgent need for a smart and sustainable solution that would help to save lives.
Solution and What It Does
Our solution is RescueMe, an intelligent proactive door lock that is both voice activated and has a motion sensor which helps to detect danger. The intelligent proactive door lock will listen for phrases that indicate danger, which is sent to Facebook’s Wit.AI Speech API for intent processing. The motion sensor on the lock acts as a detector for non-movement.
When the lock detects danger, not only will it unlock the door, the intelligent lock will also send a notification in the form of a WhatsApp message to neighbours and loved ones so that they can check on the person in danger.
Finished Prototype
We trained our Wit.AI with utterances that best resemble someone falling down. You may find sample voice tracks in the folder wav, if you wish to test it out.
The utterances were linked to the intent, call_help.
After Wit.Ai detects that the speech matches this intent, it will proceed to call the servo, which will open the door.
Here it is in action
How We Built It
Connection Diagram
The Particle Photon accepts sound input from the Adafruit microphone and converts it into a wav file, using a set of libraries. The Node.js server sends the sound byte to wit.ai's speech API, which returns an intent. If the returned intent suggests danger, the server sends a POST back to the Photon over WiFi to unlock the door. A WhatsApp message will also be sent out through the Twilio API. Alternatively, the motion sensor will detect non-movement for 30 minutes, after which it will trigger the unlocking as well. Open source libraries that were used include “node-witai-speech", 'particle-api-js' and “twilio”.
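As a rough sketch of the "danger detected" path, the snippet below reacts to a wit.ai result containing the call_help intent by calling a Photon cloud function and sending a WhatsApp alert via Twilio; the device ID, cloud function name and phone numbers are placeholders.

```javascript
// Minimal sketch: call_help intent -> unlock door via Particle cloud -> WhatsApp alert via Twilio.
// Device ID, function name and phone numbers are placeholders.
const Particle = require('particle-api-js');
const twilio = require('twilio')(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);

const particle = new Particle();

async function onWitResult(witResponse) {
  const intent = witResponse.intents && witResponse.intents[0];
  if (!intent || intent.name !== 'call_help') return;

  // Ask the Photon to drive the servo and unlock the door
  await particle.callFunction({
    deviceId: process.env.PHOTON_DEVICE_ID,
    name: 'unlock',        // cloud function exposed by the firmware (placeholder name)
    argument: 'open',
    auth: process.env.PARTICLE_TOKEN,
  });

  // Notify a neighbour / loved one on WhatsApp
  await twilio.messages.create({
    from: 'whatsapp:+14155238886',   // Twilio sandbox number
    to: 'whatsapp:+6591234567',      // placeholder contact
    body: 'RescueMe: a call for help was detected at home. Please check in.',
  });
}

module.exports = { onWitResult };
```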
Schematic Diagram
Other Power Sources Omitted For Brevity
Challenges We Ran Into
A Mix of IoT Hardware and COVID-19
: The lockdown in our home country Singapore resulted in us having to work remotely, which proved extremely challenging due to the involvement of hardware, since our solution is an IoT one. Therefore, we divided our tasks between hardware and software and held many Zoom meetings and virtual debugging sessions.
As neither of us are experienced with IOT hardware and technologies, there was a steep learning curve in figuring out how the components pieced together. It was also our first time exploring wit.ai, however the documentation provided was robust and the sharing sessions hosted by Facebook proved useful. It was the feeling of excitement of creating something, and knowing that we really had this end vision in mind that kept us going. We knew that we just had to do it.
Accomplishments That We're Proud Of
Cost Efficient and Easy Adoption
: When we built the solution, we kept in mind that we wanted it to be easily implementable and made accessible to anyone and everyone. The intelligent lock we created would only cost US$30 whereby we are able to provide a not only intelligent but also proactive solution - making it truly cost efficient, especially in competition to its counterparts that may only nail the digital part but is not proactive at all. This means that it can be easily adopted to bring safety to everyone, no matter who or where they are.
Quick Installation drives Business Value
: On top of that, installing the intelligent proactive lock only takes about 5 minutes to fit onto a door. We felt that this is extremely essential to our solution as we wanted it to be able to market easily for maximum impact. It is also not difficult to use. This solution can also be fitted onto any existing lock to drive the same results.
Commitment to Passion Project
: We were able to juggle our work to work on this passion project hackathon of ours, while also at the same time collaborating online. We are proud of ourselves for committing to working on this project and putting together the solution in a limited time frame.
What We Learned
We learnt how to use wit.ai and the potential it has for other use-cases. Will definitely be exploring and adopting it to other projects in the future!
What's Next For RescueMe
3D Printing Design
: We’re also looking at using 3D printing of the case design to hold the IoT parts of our RescueMe solution to enhance the user experience and replace the makeshift set up we have right now.
Multiple RescueMes, One Central Hub
: Beyond just having one RescueMe intelligent lock per home, the next step we are looking into is deploying multiple RescueMes in places such as the toilet and bedrooms that talk to a central hub. This would help to ensure safety throughout an entire compound, especially that of a larger one, or even a building in a synchronised manner.
Sentiment Analysis
: To make the model more robust, we would incorporate sentiment analysis as well into RescueMe. By differentiating between different trigger words and understanding context, it would increase the accuracy.
References
https://www.who.int/ageing/publications/global_health.pdf
https://www.theguardian.com/world/2020/jun/07/uk-coronavirus-victims-have-lain-undetected-at-home-for-two-weeks
https://www.reuters.com/article/us-health-coronavirus-elderly-insight/coronavirus-spreads-fear-isolation-death-to-elderly-worldwide-idUSKBN2172N8
https://www.ncoa.org/news/resources-for-reporters/get-the-facts/falls-prevention-facts/
Libraries
https://www.npmjs.com/package/node-witai-speech
https://www.npmjs.com/package/twilio
https://www.npmjs.com/package/particle-api-js
Hardware
Particle Photon, which includes an ARM Cortex M3 microcontroller with a Broadcom WiFi chip.
Adafruit Microphone MAX4466
DFRobot Metal Gear Micro Servo
HC-SR501 PIR Motion Sensor
Miscellaneous (Breadboard, jumper cables, locks)
Built With
c++
javascript
node.js
npm
photon
twilio
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/willly-the-willpower-manager | Willy- The willpower coach.
Inspiration
Whenever I have tried building a good habit (exercise) or quitting a bad habit (smoking) I have always had moments of great motivation. Most of the time these moments of motivation are triggered by something that I read,see or feel. For example when trying to exercise regularly seeing a fab looking six pack or remembering that amazing feeling you get after a good run, is very motivating. Similarly, when trying to quit smoking reading an article/infographic on how smoking affects your body, or watching a video like this makes you resolve to try harder. The problem, however, is not the lack of things to motivate you but not having an easy way to access them when you need them the most. As James Clear (author of Atomic Habits) says:
“People keep reading self-help and revisiting the same ideas because that’s precisely what we need: to be reminded.The problem is not that information is unhelpful, but that attention is fleeting."
I always imagined a friend who would act as a reminder and feed me these motivation triggers when for e.g. ‘I don’t feel like exercising’ or ‘I feel like smoking.’ And hence the idea for ‘Willy - The trainer for your most important muscle- Your Willpower.’
What it does
Willy is a bot on Facebook Messenger who helps you build good habits or quit bad habits. You can share your willpower triggers with Willy, and Willy will use these to motivate you whenever you are feeling demotivated or tempted. These triggers can be in 4 forms (for now):
(a.) A quote (that you might have read somewhere)
(b.) A self note (that you write to yourself)
(c.) A video (a link to a video) or
(d.) A facebook post (a link to a facebook post) .
At the time of demotivation or temptation when you need a motivation boost, you just say :
(a.) ‘Willy, I don’t feel like ’ - To motivate you to continue a good habit.
(b.) ‘Willy, I feel like ’ - To motivate you to not restart a bad habit.
Willy works as follows:
To choose the habit you want to build or quit you can either:
Use easy quick reply buttons.
or
Use the phrase:
Start (for good habit) e.g start exercising
Stop (for bad habit) e.g. stop smoking
Choose the options from Persistent Menu.
Next choose the trigger you want to enter by either:
Using the phrase: Add e.g. entering:
Add exercising quote “All progress takes place outside the comfort zone.”
Will save the ‘quote’ : “All progress takes place outside the comfort zone.” in your ‘exercise’ motivation list.
Whenever you want to add a new habit or add a new trigger you can go to steps 2 and 3.
When you need the motivation boost you just write the phrase in the format:
Willy I don't feel like (for good habit) or e.g. Willy I don’t feel like exercising.
Willy, I feel like (for bad habit) e.g. Willy I feel like smoking
and Willy will randomly show you one of the motivation triggers for that ‘habit’ that you had entered earlier, e.g. in the morning, if you are feeling lazy, you can write ‘I don’t feel like exercising’ and Willy will show you the quote you had fed it earlier.
Willy will ask you whether you are feeling motivated and if not he will randomly show another motivation prompt for you. He does this a total of 3 times for any one instance, after which he suggests that you either take a break or talk to an expert (a service that we can build later where we connect users to motivational coaches, psychologists and other professionals in that domain.)
How I built it
Front end Facebook messenger.
Backend using Node.js and Neo4j.
Used Wit.ai as the NLP engine.
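For illustration, here is a minimal sketch of how the habit/trigger graph could be stored and queried in Neo4j from Node.js; the labels, relationships and properties are assumptions, not the project's actual data model.

```javascript
// Minimal sketch: store a motivation trigger per habit and fetch a random one.
// Labels, relationship names and properties are placeholders.
const neo4j = require('neo4j-driver');

const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASS));

async function addTrigger(userId, habit, type, content) {
  const session = driver.session();
  try {
    await session.run(
      `MERGE (u:User {id: $userId})
       MERGE (h:Habit {name: $habit})
       MERGE (u)-[:WORKS_ON]->(h)
       CREATE (t:Trigger {type: $type, content: $content})
       MERGE (h)-[:HAS_TRIGGER]->(t)`,
      { userId, habit, type, content });
  } finally {
    await session.close();
  }
}

async function randomTrigger(userId, habit) {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (:User {id: $userId})-[:WORKS_ON]->(:Habit {name: $habit})-[:HAS_TRIGGER]->(t:Trigger)
       RETURN t ORDER BY rand() LIMIT 1`,
      { userId, habit });
    return result.records[0] ? result.records[0].get('t').properties : null;
  } finally {
    await session.close();
  }
}

module.exports = { addTrigger, randomTrigger };
```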
Challenges I ran into
Quite a few challenges like:
1. Saving context for the current habit, especially when a user has entered multiple habits.
2. Since we are dealing with free text and the user can enter anything, designing the flow so that the user gets only relevant information.
3. Writing the logic for randomly displaying the motivation quote without repeating it too much.
4. Using the appropriate phrases for training wit.
Accomplishments that I'm proud of
Combining free text and quick replies to design a neat flow of the conversation.
Using the intents and entities in wit.ai effectively to make it an easy conversation.
Effectively designing the error handling/fallback strategy for gibberish content.
What I learned
Combining all the different pieces together to deliver one cohesive solution. Also the use of graphs effectively to save the context of the user.
What's next for Willly the willpower manager
Launching Willy as a voice bot.
Build a service to connect Willy with motivation experts.
Building intelligence in Willy so that he can understand which motivation prompts are most helpful so that he can show them more and also suggest similar prompts.
Adding support for other motivation prompt types like: ‘Voice Notes’ and Images.
Built With
facebook-messenger
neo4j
node.js
wit.ai
Try it out
m.me |
10,007 | https://devpost.com/software/smart-shopper-fp47ri | Smart Shopper
Inspiration
To help small scale business owners survive in post covid era.
What it does
The web application that I built helps local businesses such as grocery stores, dairies, etc. to discover more customers living in their vicinity but previously unknown to them, and hence expand their businesses. It also helps people find their everyday essentials in an easy way by having the items delivered to their homes by these small-scale shop owners.
How I built it
I built my web application using popular web technologies and incorporated the wit.ai API to make a
chatbot
that helps users to request their required items. The chatbot then sends the request to the shop owner's page, from where the shop owner can either accept or reject the request depending on the availability of the product.
Challenges I ran into
The main challenge I ran into was implementing the server that helped to communicate between the shop owner and customer page. As of the date of submission I haven't yet implemented a fully functioning server. Another problem that I faced was in using a
noSQL
database like Firebase, as it was my first time using Firebase.
Accomplishments that I'm proud of
I have implemented a basic, fully functional website that can at the moment accept only one request, but I am confident about improving on it in the future. I was totally doubtful of even completing the project, but now I feel proud that I am able to make at least a basic submission.
What I learned
I have learned the challenges in making an application that has to scale up to a huge user base.
What's next for Smart Shopper
In the future the Smart Shopper will extend to pharmacies as well in order to promote a safe way to get medicines during times of crisis. The Smart Shopper will also get a mobile app companion.
Built With
bootstrap
javascript
leaflet.js
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/skill-buddy | Inspiration
A large pool of young people in our country, Nigeria, want to learn a skill but struggle with motivation and a seemingly lonely learning journey.
What it does
Skill Buddy lends a helping hand by connecting skill enthusiasts to fellow learners and reminding them of their daily goals.
How I built it
Using wit.ai, we trained the chatbot. Then we used Python (the Django framework) for the backend, which was hosted on Heroku. This app was then connected to Messenger through our Facebook page for accessibility.
Challenges I ran into
While we were building the app, we wanted to have a notification system that reminds our users of their daily learning goals.
This was not quite easy to do, as we had not implemented anything quite like it before.
We were able to solve this problem using Celery.
Accomplishments that I'm proud of
Our major accomplishment was bringing together different components (i.e. Messenger, wit.ai, the Django framework) and making the app run successfully.
What I learned
We learnt to use wit.ai.
We learnt to run periodic tasks using Celery in the Django framework.
What's next for Skill Buddy
Creating a real community of connected skill buddies and self improvement advocates.
Built With
django
facebook-messenger
heroku
python
wit.ai
Try it out
web.facebook.com |
10,007 | https://devpost.com/software/a-m7lzpr | Emotional Support
Location Services
Inspiration
When we first read the project description, we thought of many different ideas for how to implement wit.ai: chatbots, search engines, text-to-speech, just to name a few. What really helped us narrow down what to do was when we took a second to think about everything going on in the world. We wanted to create something that can actually positively impact the world, and that was the beginning of our chatbot: "healBot".
What it does
healBot is a chatbot designed to help users with mainly medical issues while providing recent statistics on COVID-19 and "what-to-do's". It helps users by giving tips and resources (in the form of links to respectable websites) to lead them on a path to better understanding.
For example, if you were to tell healBot : "I have pain in my back"
healBot would say: "Here's some information on back pain
https://www.mayoclinic.org/diseases-conditions/back-pain/symptoms-causes/syc-20369906
Here's info on how to treat back pain
https://www.webmd.com/back-pain/features/manage-low-back-pain-home
Hope this helps! "
(There's way more features, depending on the intent of the utterance such as emotion/find/information)
How we built it
We built the front end with Node.js, centering it around the React.js library. The back end was originally also built with Node.js, but we ultimately decided to switch to Python 3 since part of our back end logic had already been built with Python 3. The front end is hosted on Netlify, and the back end is hosted on Heroku. The front and back ends communicate via Socket.IO, which simplifies the information request process. Although Socket.IO is not natively built for Python 3, fortunately, we found a library called Flask-SocketIO which allows us to connect to our front end Socket.IO client.
As for the programming of the chatbot, we built it around the intents we detected from utterances using our trained wit.ai app. The main intents we trained it to learn were: information, remind, salutation, find, express, and criticism. With each intent came the specific entities to go with it. We won't go over what every entity was, but here is a brief rundown of what each intent does:
information: Gives users resources in the forms of links for various topics they might be looking for be it diseases or staying healthy. Also, it provides up to date statistics on COVID-19 and what to do's for COVID-19.
remind: Allows users to tell healBot to set reminders for whatever they need. Whether the user provides a timespan or a set time to be reminded. (Note although we have the chatbot set to respond properly to the remind intent, the actual reminding function has not been implemented yet)
salutation: Simply handles Greetings and Goodbyes, while detecting a name if there is one.
find: Provides users with info on nearest open medical facilities, while also providing links if the user is looking for something online that is medical related.
express: Handles when users talk about their emotions or symptoms they might have, giving either advice or links to treat/learn more about the symptoms.
criticism: Handles when users talk about the app in either a positive or negative light. We hope that in the future it can save the criticisms given.
Challenges we ran into
One challenge we constantly ran into throughout the development process was having a properly trained wit app. Too many times, while training the app on one type of intent/entity, it would negatively affect a previously trained entity, so we would have to go back and do proper retraining. Specifically, the bot began to mark
everything
as a reminder entity.
Figuring out how to build the server and client proved to be difficult, as this was both of our first times learning web development. When we had achieved local communication, we thought we were on the brink of success—but little did we know about how difficult deployment would be. We ran into plenty of issues with synchronous processes, timeouts, and unsent requests that would require a full day of debugging. Fortunately, we managed to get past all of those hurdles, and we now have a functional website.
Accomplishments that we're proud of
We are proud of everything we have accomplished as a whole. When first starting out this project we quickly realized how daunting the task was to develop a chatbot with all the features we wanted, being capable of holding up a conversation all presented on a website in the message format. But now with a completed project to present working almost better than expected, we can't help but be proud of everything we accomplished.
What we learned
We learned that when it comes to AI for text, it can be very iffy depending on how it is trained, via keywords/synonyms, or just how things are spelled. For example, since reminders normally have a verb or the word "my" in them, a lot of things that weren't reminders began to be marked as reminders. We also learned that creating a website and putting it into production is much more work than it seems.
What's next for healBot
The future of healBot can be refining its features and the wit app behind it. For one, we can implement an actual function for the bot to remind the person. Right now we are able to extract the DateTime from when the user wants to be reminded but turning that data into an actual reminder has not been implemented yet. In addition, in the future, we can also add some sort of database in order for the bot to keep track of who is who, remember names, and other important details. Additionally, we're looking to make improvements to the UI and add some additional functionality to the website.
Built With
beautiful-soup
css
flask-socketio
gevent
google-geocoding
google-maps
grequest
grequests
heroku
html
javascript
netlify
node.js
python
react
react-bootstrap
react-chat-ui
react-router-dom
react-scroll-to-bottom
react.js
requests
socket.io-client
styled-components
uwsgi
wit.ai
witai
Try it out
heal-bot.netlify.app
github.com |
10,007 | https://devpost.com/software/vr-scene-voice-builder | This is a proof-of-concept for a scene builder using only the user's voice.
How to use
The app consists of a virtual assistant to whom you can tell instructions to create and move objects in a 3D scene.
Sample instructions (parts in *italics* are optional):
create a {object description}, *named {object name}*, *{object position}*
move the {object description/name} {position}
An object description can be anything, as long as it has a visual representation (it searches for images on Google).
So, get creative!
Objects can be moved by specifying a direction and distance (e.g. "3 meters to the right") or a relative object and anchor (e.g. "above the tree").
Example instructions (used in the video):
create a blue car on the ground
create a green tree at the left of the blue car
move it 1 meter to the left
make it bigger
create a red car at the right of the green tree
move the red car 2 meters to the right
create an amazing guy named Max
place Max above the tree
How it works
The web app was built with A-Frame, which allows running it on WebVR-enabled devices such as the Oculus Quest.
The voice recognition uses the Web Speech API, and for speech synthesis it uses the SpeechSynthesis API.
Commands are processed and analysed using a trained Wit.ai model, in order to create, place, move and scale objects.
Images are searched using Google's Custom Search API (you may need an API key if you want to try your own).
What's next
I believe virtual reality and voice recognition have a great potential together for more efficient interactions, so I'm really looking forward to enhancing this prototype and hopefully see this kind of technology integrated into future VR apps.
Built With
a-frame
javascript
Try it out
aframe-voice-recognition-wit-ai.glitch.me |
10,007 | https://devpost.com/software/simba-bot | Bot Icon
Welcome screen
Send a greeting message like Hi to get started
Conversations 1
Conversations 2
Conversation 3: Request to handover conversation to real human
Conversation 4: NPS survey after human support marks page inbox message as done
Conversation 5: NPS survey acknowledgement after response data gets logged on Facebook Analytics
Option to call a business representative
Conversation: Request to handover conversation to real human
Inspiration
Over the years, a number of businesses have shut down due to low engagement with their users. Globally, 1 in 10 businesses close due to issues relating to customer retention and engagement. The desire to play a part in solving this issue is what gave birth to the idea behind this solution, the Simba Bot.
What it does
Using the Messenger handover protocol API and quick replies, this bot provides an uninterrupted customer support service for businesses (we partnered with DeliveryNow NG to test this beta version). Users can request information about a specific business (DeliveryNow NG in this case), track orders, and reach out to real human support via the inbox (using the handover protocol API) or via phone call. The Simba Bot uses wit.ai to understand conversations and returns predefined responses based on the expressions and intents it receives.
After taking back control of a conversation from a real human, the bot sends the user a short NPS (Net Promoter Score) survey using quick replies, which helps measure customer satisfaction. The survey responses are logged on Facebook Analytics, where the business can make meaningful decisions using the provided data.
How we built it
Wit.ai powers the bot and serves as its brain. The bot was set up using the Messenger Node SDK, and the whole project is written in NodeJS.
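For context on the handover piece: passing a thread to a human agent is ultimately a single Graph API call. The sketch below shows roughly what it looks like, written in Python purely as an illustration (Simba Bot itself uses the Messenger Node SDK); the token, PSID, and API version are placeholders.

```python
# Illustrative sketch only: hand a Messenger conversation over to the Page Inbox
# so a human support agent can take over. Values are placeholders.
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"
PAGE_INBOX_APP_ID = 263902037430900  # the app id Messenger documents for the Page Inbox

def pass_to_human(psid, note="User asked for a human agent"):
    resp = requests.post(
        "https://graph.facebook.com/v8.0/me/pass_thread_control",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={
            "recipient": {"id": psid},
            "target_app_id": PAGE_INBOX_APP_ID,
            "metadata": note,
        },
    )
    resp.raise_for_status()
    return resp.json()
```

A matching take_thread_control call lets the bot resume the conversation afterwards, which is the point at which the NPS survey described above is sent.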
Challenges we ran into
Time was a major challenge; even though we still work on this project day in and day out, it isn't yet what we envision it to be.
Accomplishments that we're proud of
Being able to write and run many conditions against the wit.ai entities in such a small amount of time; the project's GitHub commit history is a testament to this.
What we learned
It is not enough to just write code for a virtual assistant; training the bot on wit.ai felt just as important as writing the code itself, so two members of the team were assigned the responsibility of training the wit.ai app.
What's next for Simba Bot
To integrate a seamless Messenger chat extension using the chat extensions API, which should give businesses the ability to process customer orders, monitor inventory, and accept payments for customer orders all within the bot.
We also have plans to reinvent the bot as a SaaS solution that not only houses support for a single business but provides real-time support for many businesses, while continuing to provide powerful analytics.
Built With
facebook-messenger
node.js
wit.ai
Try it out
m.me
github.com |
10,007 | https://devpost.com/software/ask-marge | First screen on page load
Inspiration
I have a teenage son. He has a lot of questions, but he doesn't always know how to ask them. When he wants to know about sex he doesn't feel comfortable talking with his teachers. He's somewhat comfortable talking with me and his dad, but we all know those conversations can feel weird. I wouldn't want him to "just google stuff" because he could end up on adult-only websites, or maybe see some pictures he should not yet be seeing.
I believe that is the case for a lot of teenagers today. And that's why I created the website Ask Marge.
What it does
Ask Marge gives predefined answers on a variety of sex-related topics. At the same time, Marge is very respectful of the user's privacy: it does not save any cookies or cache.
The second feature of Ask Marge I am happy to present is the unopinionated avatar. Website users can freely choose an avatar for themselves and for the bot. I think uncomfortable conversations can be more comfortable if we get to decide who we have them with :)
How I built it
I built Ask Marge as a website on top of Create-React-App. I deployed it with Netlify.
I contact Wit.ai through fetch requests.
Github Repo link
Challenges I ran into
The biggest challenge was anticipating the user's questions. I wanted to make sure I included as many utterances as I could possibly think of. I am not a native English speaker, so I hope I didn't forget too many things.
Accomplishments that I'm proud of
I'm very proud of the accuracy of my Wit.ai bot. I also think the UI of my website is going to be appealing to teenagers.
What I learned
I learned a lot about Machine Learning! I didn't know it was that simple to train a bot.
And I loved the satisfaction of adding a slightly changed utterance and seeing my bot recognize the intent automatically.
What's next for Ask Marge
Ask Marge needs a more mobile-friendly version. I would also love to add a little Information icon in the top right corner, where the users can click at any time and read what the page is about again, and where I would add links to all the great cartoon images I used.
Built With
create-react-app
netlify
react
Try it out
ask-marge.netlify.app |
10,007 | https://devpost.com/software/ayur-bot | Facebook page where we can chat with the bot.
AyurBot application in the developer.
Chatbot in Wit.ai
Inspiration
Health is wealth. Being healthy is the most important way to lead a happy life. Especially due to the Covid-19 pandemic, everyone's life is at risk. There was a lockdown and markets were shut, yet Covid-19 is still affecting the health of our near and dear ones. The only precaution we are left with is to improve our immunity and health. Hence we provide an innovative way to tackle this situation by introducing a new friend, Ayur Bot.
What it does
Ayur Bot is a chat-based assistant that can guide users in improving their health naturally via ayurvedic recipes and treatments.
How I built it
I built the bot with the help of Wit.ai, adding models and functions to complete the chatbot's functionality. I integrated it with Facebook Messenger and used threading to implement the subscription features.
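As a rough illustration of how a Flask webhook and a subscription thread can coexist (this is an assumed structure with placeholder names, not the project's actual code):

```python
# Sketch: a Messenger webhook in Flask plus a daemon thread that periodically pushes
# an ayurvedic tip to subscribed users. Names and the send function are placeholders.
import threading
import time

from flask import Flask, request

app = Flask(__name__)
subscribers = set()  # PSIDs of users who opted in to daily tips

def send_text(psid, text):
    # Placeholder: call the Messenger Send API here.
    print(f"to {psid}: {text}")

def daily_tips():
    while True:
        for psid in list(subscribers):
            send_text(psid, "Daily tip: start your morning with warm water and tulsi.")
        time.sleep(24 * 60 * 60)  # once a day

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json()
    # ... extract the sender PSID and text, send the text to Wit.ai, reply, and add
    # the PSID to `subscribers` when a subscribe intent is detected ...
    return "ok"

if __name__ == "__main__":
    threading.Thread(target=daily_tips, daemon=True).start()
    app.run(port=5000)
```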
Challenges I ran into
I ran into some environment set-up problems as I was new to Wit.ai. Then I encountered issues with setting up the chatbot responses on the server, and some more while setting up the threading.
Accomplishments that I'm proud of
The whole implementation of the chatbot is a proud accomplishment. The integration with Facebook is another proud moment, as I can now advise my friends about ayurvedic remedies through my Facebook page.
What I learned
I learned more about bots, Wit.ai, Facebook Messenger integration, and threading concepts.
What's next for Ayur Bot
I will integrate with other platforms as well, then enhance the data set to give remedies and treatments for many more diseases, and introduce additional features like live monitoring by connecting with IoT devices.
Built With
css3
facebook-chat
facebook-messenger
flask
html5
machine-learning
natural-language-processing
python
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/covidbot-gfuxq4 | Inspiration
I've seen the damage misinformation about Coronavirus has done to families across the world. Thousands of people have lost their lives and many more are infected because they could not get accurate information about Covid. That was the inspiration behind this chatbot. I also wanted to get hands on experience with building an AI application with the skills I presently have. The prize money was also a good incentive for me as I am saving up for further studies.
What it does
CovidBot is a chatbot built using wit.ai NLP that provides information about Covid19 to users in real time based on their questions and inputs.
How I built it
Initially, I used my local machine, Node.js, and ngrok (as the local web server host) to build and test the app while integrating it with the wit.ai NLP. It was difficult getting the POST and GET requests to succeed, as I was using my local PC as the web server and it was my first time working on backend programming. Later, I used Glitch to host the webhook and tie everything together: the wit.ai NLP, the Node.js environment, and Facebook Messenger.
Challenges I ran into
First one was setting up a local server on my machine using node.js and getting my GET/POST requests to return back a successful message.
The second was integrating my webhook into my Facebook app; I didn't get it quite right the first time, so I ran into some errors. But thanks to helpful tips from some members of the Hackathon group, I was able to handle it.
I also had challenges getting people to test the app before it was approved, because I wasn't conversant with using app "Roles"; once I read the documentation on it, it was clear.
Finally, having to write a privacy policy for the messenger app as part of the requirements before the app was finally approved was new to me. I took quite some time to research and come up with a pretty good one.
Accomplishments that I'm proud of
I am proud to have created a Facebook app for the first time. Though it's only able to answer questions and give information, the challenges I overcame while tying it all together using a webhook taught me a lot about how web servers and applications really work, so I am proud to have done something in this field.
What I learned
A whole lot of things, as I have already mentioned: web apps and the application of AI to practical problem solving. I also learnt some critical thinking and problem-solving techniques in the process.
What's next for CovidBot
I'll keep improving it and explore other ways I can integrate much more complex abilities into it. I will also explore other ways I can build a similar chatbot for other use cases and topics applying what I have learnt.
Built With
facebook-messenger
glitch
node.js
wit.ai
Try it out
m.me |
10,007 | https://devpost.com/software/auto-response-bot | AutoModerator in Twitch Chat
AutoModerator in command line test mode (censored)
NOTE: VIDEO AND PHOTOS HAVE BEEN HEAVILY EDITED TO CONFORM WITH COMMUNITY GUIDELINES. NOTHING SAID OR WRITTEN REFLECTS MY PERSONAL VIEWS AND IS PURELY FOR DEMONSTRATION PURPOSES. IN AN IDEAL WORLD, RACISM, SEXISM, HARASSMENT, ETC SHOULD NEVER EXIST AND A NEED TO DETECT THEM WOULD NEVER BE NEEDED.
Inspiration
As a twitch streamer and viewer myself, I have seen first hand the amount of time moderators spend answering questions and the necessity of having good moderators in chat. A toxic environment can easily occur in small-medium streams without the moderator manpower to stop it. These toxic environments drive away potential viewers but more importantly, make people uncomfortable and not enjoy themselves.
What it does
This is where AutoModerator comes in. By using Wit.ai NLP, I was able to answer generalized FAQs without the specific commands that other bots use. This lets new viewers who have never set foot in that stream before get the answers they are looking for without memorizing commands. A user can simply ask *what song is this?* rather than use commands that could vary between !music, !song, !playing and many more.
On top of this, AutoModerator can detect and punish users accordingly based on toxic chat messages. The current iteration can capture everything from generally light toxic comments like 'you suck' to more egregious messages that delve into racism, sexism, and harassment. As it stands, AutoModerator mostly times out users that post these more extreme messages; however, based on the level of toxicity, AutoModerator can even permaban the user from the chat.
How I built it
I used TwitchIO to create a simple Twitch chatbot and set it up with my own Twitch channel's chat. After creating this, I used Wit's Python library to connect my chatbot to Wit services and set up the sending and receiving of messages and responses. I pipe the messages from the chat into my chatbot and then to Wit, and using the responses from Wit, I curated multiple automatic responses.
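The decision logic boils down to classifying each chat line and mapping the result to an action. The sketch below shows that shape only; the intent names, confidence thresholds, and token are my assumptions rather than the project's actual schema, and in the real bot this would be called from the TwitchIO message handler:

```python
# Sketch of the moderation decision (assumed intent names and thresholds).
from wit import Wit

client = Wit("YOUR_SERVER_ACCESS_TOKEN")  # placeholder token

def moderate(chat_message):
    data = client.message(chat_message)
    intents = data.get("intents", [])
    top = intents[0] if intents else None

    if top and top["name"] == "toxic":            # assumed intent name
        if top["confidence"] > 0.95:
            return "ban"                          # egregious: racism, sexism, harassment
        if top["confidence"] > 0.80:
            return "timeout"                      # lighter toxicity
    if top and top["name"] == "faq_song":         # assumed FAQ intent
        return "reply: Now playing: ..."
    return "ignore"

print(moderate("what song is this?"))
```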
Challenges I ran into
As it was my first time making a Twitch chatbot, getting everything set up was a small roadblock on my journey towards completion. The largest hurdle I ran into was fine-tuning Wit's responses and predictions. Even now, I am not completely satisfied with the sensitivity of the toxicity detection and will be working on it more after the results of the hackathon.
Accomplishments that I'm proud of
I built a pretty good automatic moderator for Twitch chat and with some tweaking, will actually be implementing it in my own streams.
What I learned
I learned a ton about Wit's platform as well as about setting up chatbots for Twitch. I also got to dip a little into Wit entities and traits, even though I didn't make full use of them.
What's next for AutoModerator
Like I said before, once results are released and I can get back to work on it, I'd like to fine-tune some of my toxicity classifications. I also want to bring it to other platforms such as Facebook Gaming and YouTube Live. I will be looking into creating bots for those systems as well as adding a bit more functionality for less gaming-focused live streams.
Built With
python
twitchio
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/google-chrome-voice | Inspiration
Doing mundane tasks is boring, so why not speak to Google Chrome and tell it what to do. Whether it's something easy like opening a new tab or googling something, get it done using Voice. This could simply be the Alexa or Siri for Google Chrome.
What it does
It's a Google Chrome extension that performs tasks you ask it to do. It is currently limited to a set of actions like setting the background colour or image and resetting it, as well as opening and closing tabs. The next action to add is asking it to Google something for you and show you the results.
However, this Chrome Extension could also help people that struggle to browse the web, perfect for people with limited hand movement or mobility.
How I built it
As this is a Chrome extension, it is built entirely with JavaScript, HTML and CSS. In the future, we could build an API where people can create accounts and save settings that change their experience of using the extension.
Challenges I ran into
The first challenge I ran into was how to create a Chrome extension at all. Figuring out how to structure the project and how to build the files to use JS dependencies was the biggest of the challenges and set me back a lot of time. However, now that I know more about the structure and about building JS and CSS using Webpack, the next Chrome extension should be a breeze.
Accomplishments that I'm proud of
Training the AI was a new experience, as I have never really jumped into AI projects. Wit.ai was fairly simple to use, and seeing the bigger picture when training what should happen really helped.
What's next for Google Chrome Voice
The next stage for Google Chrome Voice is firstly to add more actions so you can do more with it, and secondly to make it more customisable so the user can change default options like which search engine they would like to use when searching the web. This would require a database and an API to make requests to, which would increase the complexity of this project.
Built With
chrome
css3
google-search-api
html5
javascript
jquery
npm
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/mystery-buff-ma5rgc | To start the game
Tracking User state
Inspiration
My inspiration to be a writer takes its roots from my mother, who is a renowned writer in my hometown. Thinking of the pivotal plot point was a challenge. The flow of the story should be like a tree structure, giving 2 or 3 options to choose from at every point, and the story should have multiple twists intentionally placed to confuse the intended audience. The story also had to be short and fit into the app being designed, so I had to think from every perspective so that the user would be driven further based on their choices. Revealing the killer without any clue would be pointless, so I made sure that clues about the cause of the victim's death were placed at every point of the story. The whole point is to keep the user engaged throughout the flow of the game, and I hope we were able to achieve that.
What it does
People like to solve mysteries, yet there are only a few thought-provoking games online. With the advent of Alexa and Google Assistant, people enjoy asking questions and getting answers back, and a chatbot works the same way. So we decided to develop a chatbot with more and more mysteries to solve.
How we built it
We used Python as the backend server that connects Messenger and Wit.ai. User input from Messenger is sent to the Wit.ai server for NLP processing, and based on Wit.ai's understanding, users get follow-up questions in the game.
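To make the story branching and the quick-reply state tracking concrete, here is a minimal sketch of that pattern (the story nodes, token, and API version are placeholders, not the project's actual content):

```python
# Sketch: track each player's position in the story and offer the next choices as
# Messenger quick replies. Token, story text, and IDs are placeholders.
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"
user_state = {}  # psid -> current story node

STORY = {
    "start": {"text": "You enter the victim's study. Where do you look first?",
              "choices": ["Desk", "Bookshelf"]},
    "Desk": {"text": "A torn letter lies in the drawer...",
             "choices": ["Read it", "Leave it"]},
}

def send_quick_replies(psid, text, choices):
    requests.post(
        "https://graph.facebook.com/v8.0/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={
            "recipient": {"id": psid},
            "message": {
                "text": text,
                "quick_replies": [
                    {"content_type": "text", "title": c, "payload": c} for c in choices
                ],
            },
        },
    ).raise_for_status()

def advance(psid, choice):
    node = STORY.get(choice, STORY["start"])
    user_state[psid] = choice  # remember what the player has already selected
    send_quick_replies(psid, node["text"], node["choices"])
```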
Challenges we ran into
1) Making the chatbot interactive while keeping it descriptive and detailed for mystery solvers.
2) Connecting Wit.ai with Facebook Messenger, because the example code given on GitHub had a bug and didn't work as expected.
3) Keeping track, in the quick replies, of what users have already selected.
4) Finding the right feature-engineering groupings for training the wit.ai NLP.
Accomplishments that we're proud of
End-to-end implementation.
Working knowledge of Wit.ai NLP features.
Quick reply state maintenance.
What we learned
How to create an efficient and interactive chatbot.
What's next for Mystery Buff
Add complex and challenging mysteries.
Built With
amazon-web-services
natural-language-processing
python
wit
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/tea-time-chat | Tea Time!
Inspiration
It is a good idea to journal to reflect on your day and express yourself, but what if you would rather talk it out? I thought it would be a cool idea to make a chatbot that serves as a journal that you make entries with by chatting with it.
What it does
You chat with the Tea Time chatbot, and it generates reports for you to view as your journal. You can see at a glance how you were feeling on past days and the topics you mentioned.
How I built it
A Messenger chatbot with Wit.ai for the conversation; HTML, CSS and JavaScript for the visuals; JavaScript for the functionality.
Challenges I ran into
The biggest challenge was getting the chatbot up and running. Also, I wasn't the most experienced in JavaScript, but I was able to figure out things I never thought I would.
Accomplishments that I'm proud of
I am proud that I never gave up in making the chatbot. I felt like giving up tons of times, but I kept coming back.
What I learned
I learned that AI is accessible for anyone to try and that I don't need to be an expert to use it.
What's next for Tea Time Chat
Improve the chatbot to be less chatbot-y
THE USERNAME AND PASSWORD TO ENTER
USERNAME: admin
PASSWORD: 123abc
Really secure, I know.
Github:
https://github.com/chiuannica/teatimechat
The Actual Thing:
http://teatimechat.herokuapp.com/
Skip the login and get to the reports:
http://teatimechat.herokuapp.com/home
Built With
css3
facebook-messenger
html5
javascript
wit.ai
Try it out
teatimechat.herokuapp.com |
10,007 | https://devpost.com/software/flux-c2u8ki | Landing
Radio
Inspiration
As a music lover with experience in accessibility, I was excited to build something that could combine my passion for music with the ability to help those with disabilities.
What it does
Flux is a fully voice-controlled interface for Spotify. Users can give voice commands and the site will interpret the request and fulfill it in an open Spotify player. For example, a user can say "play some Kanye", and Flux will query Spotify for songs related to the term "Kanye"; users can also be specific by asking something like "play Rocket Man by Elton John". Flux can also handle requests related to the various audio features that Spotify tracks, so the request "play a lit song" will query songs with high danceability and energy. Additionally, Flux has control of your Spotify playback and can pause, play, and skip songs at your request.
How I built it
The web application is built with an AngularJS frontend and Node.js for backend functionality. To capture audio and process language, I used JavaScript MediaStreams and sent the data to a wit.ai instance. I leveraged the Spotify API to handle song recommendations and to interface with active devices. The microphone audio visualization is built with p5.js.
Challenges I ran into
Having never used wit.ai or any sort of natural language processing tools, there was some learning curve in figuring out how to set up my wit instance and how to train the AI.
Accomplishments that I'm proud of
I was proud of how quickly I learned new technologies and was able to use them to build a real product. It was really exciting to build something that I want to use on a daily basis.
What I learned
I learned a lot about wit.ai and training instances, plus lots of details about asynchronous functions and Node.js API calls.
What's next for Flux
I hope to expand Flux's understanding of more complex requests and compound phrases. I also want to find a way to bring Flux's functionality to a more widespread platform or a completely internal player, rather than relying on a secondary application.
Built With
angular.js
express.js
javascript
node.js
p5
p5.js
spotify
wit.ai
Try it out
fluxdj.herokuapp.com
github.com |
10,007 | https://devpost.com/software/inup-ia-online-accelerator-based-on-ai | social graph signal of social algorithm
Inspiration
Our mission is to raise the survival rate of early-stage startups in the market from 2% to 8%.
What it does
The main task we set ourselves is to help young startups properly present their projects, find their first clients and enter the financing stage, and most importantly, based on algorithms, minimize investment risks for investors.
The service is primarily designed for investors; it operates on a syndicate model.
The syndicate is a new model for investing in start-ups at the early stages, where you invest together in selected projects directly, with no commissions on transactions from either investors or projects.
The InvestStartup.Club platform works as a software as a service. It is a business model where the software product is provided as a web service on a subscription basis.
For startups, registration on the platform will be free. So far, you can access the accelerator only by invitation; to get one, you need to register and apply on the site.
How I built it
The new platform is based on a microservice architecture built around small, loosely coupled, and easily modifiable modules. The platform's algorithms are built using artificial intelligence and chatbots and do not depend directly on what you do.
A functional feature of the platform, similar to an Applicant Tracking System, is a component that automatically manages some stages of selection, especially screening unsuitable candidates out of thousands of incoming applications.
What's next for InUp.ia - online accelerator based on AI
As a next step, InUp.ia will provide smart and adaptive AR pitch deck templates and lessons for entrepreneurship education.
Built With
apache
apollo
ar
electron
hooks
isomorphic
javascript
particle
react
ssr
universal
webpack
wit.ai
Try it out
investstartup.club |
10,007 | https://devpost.com/software/nuwen-project | Inspiration
With COVID-19, the number of students starting online classes grew exponentially! This situation imposes new challenges on these students and their parents, who must keep track of the deadlines of a series of assessments and activities carried out on a learning platform.
Sometimes it is difficult to get even a specific date from the learning platform, because you need to log in and start searching for the information you need.
It would be easier if you could just talk to the platform and get the information.
One of the most used learning platforms is Moodle, an open-source application that has spread across the world and is used by many educational institutions today.
So, with the help of Wit.AI, we constructed a Facebook Messenger interface that allows students and their parents to get information in a simpler and more intuitive way.
What it does
It implements a speech and natural language interface to Moodle, an open-source learning platform. With the help of Wit.AI and our interface, it is possible to get information from Moodle using Facebook Messenger and natural language.
How We built it
We built it around an AWS Lambda function written in Python. The idea of using a Lambda function is to reduce costs (because the interface runs only when it is invoked by Messenger) and to give the interface the ability to run 24 hours a day, 7 days a week.
We trained the Wit.AI app with common utterances said by students to get information, and we used subject names like Grammar and Mathematics to train the entities.
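To show roughly how these pieces fit, here is a sketch of a Lambda handler that maps the intent and entities extracted by Wit.AI to a Moodle REST web-service call. The site URL, token handling, and the specific web-service function are illustrative assumptions, not the project's actual configuration:

```python
# Sketch: AWS Lambda handler that turns a Wit.AI result into a Moodle REST query.
# MOODLE_URL, MOODLE_TOKEN, and the wsfunction are assumptions for illustration.
import json
import urllib.parse
import urllib.request

MOODLE_URL = "https://moodle.example.edu/webservice/rest/server.php"
MOODLE_TOKEN = "YOUR_MOODLE_WS_TOKEN"

def call_moodle(wsfunction, **params):
    query = {"wstoken": MOODLE_TOKEN, "wsfunction": wsfunction,
             "moodlewsrestformat": "json", **params}
    with urllib.request.urlopen(MOODLE_URL + "?" + urllib.parse.urlencode(query)) as resp:
        return json.loads(resp.read())

def lambda_handler(event, context):
    # `event` would carry the intent/entities extracted by Wit.AI from the message.
    body = json.loads(event.get("body", "{}"))
    if body.get("intent") == "get_deadlines":
        data = call_moodle("mod_assign_get_assignments")  # assumed ws function
        count = sum(len(c.get("assignments", [])) for c in data.get("courses", []))
        reply = f"You have {count} assignments coming up."
    else:
        reply = "Sorry, I couldn't find that in Moodle."
    return {"statusCode": 200, "body": json.dumps({"reply": reply})}
```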
Challenges I ran into
Moodle has an API that allows connections from outside applications; however, it is difficult to find documentation for it. Another difficulty was training the Wit.AI application with the right sentences.
Accomplishments that I'm proud of
I am proud of the speed of the responses we get when obtaining information from Moodle. It is much easier to start a small conversation in Messenger to get deadlines, for example, than to have to access the Moodle site to retrieve such simple data.
What I learned
I have learned to use Wit.AI, an incredible platform for processing natural language. It is so easy to work with that I was impressed.
What's next for Nuwen project
Translate it into new languages, one of which is going to be Portuguese, our native language. The other goal is to spread its use: Moodle has spread all over the world, and we think many students can make use of the Nuwen Project.
Built With
amazon-web-services
lambda
python
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/sova | SOVA Screen Captures
Inspiration
Since mobile voice assistants are currently used by many people, improvements to these systems are needed. Sometimes people get bored with a plain voice assistant or a flat chatbot. Also, in some cases, most voice assistants are too general and can't handle specific scenarios in more detail.
What it does
SOVA is an intelligent assistant with 3D visuals that can be more entertaining than plain text-based or voice-only assistants. You can interact with this virtual assistant using your voice, and it will respond and act like a human being. The application also specializes in certain cases so it can go into more detail when you ask about something. For example, this prototype version includes our first case, "COVID-19 Assessment": you can ask SOVA to help you with a COVID-19 assessment in more detail, and it will calculate the result and give you some suggestions after the session.
How I built it
Wit.ai as our intelligent system for Natural Language Processing
Unity as a tool to build the application
iClone to create and modify the 3D Avatar
Facebook SDK for the login functionality.
Challenges I ran into
The main challenge was integrating the Wit.ai API with Unity; we needed to make some adjustments to handle the JSON responses. Another was making the natural language processing make more sense, because we have to keep training it so it can recognize our intentions. This is the first time we have built NLP from scratch, so maybe it's not that great for now, but we are still improving it.
Accomplishments that I'm proud of
We are happy to have completed an artificial intelligence project with a visual assistant like this, one that can help people in various ways.
What I learned
Learn about Wit.ai itself and learn how Natural Language Processing works.
Learn how to integrate Wit.ai and Unity (How they communicate with each other)
What's next for SOVA
If possible, we will make it more accurate and add more cases in fields like education, finance and others.
Improve how the avatar will interact with the user (expression, animation etc)
Built With
api
facebook-login-api
iclone
json
speech-recognition
unity
wit.ai
Try it out
sova.rgplays.com |
10,007 | https://devpost.com/software/wit-bizz | Inspiration
The first thing that arguably every potential customer does to verify your business's authenticity, or to get a general idea of it, is to check your website. You need a website even more if yours is a small business with limited resources. This makes it imperative to create a website that contains information about your business and allows stakeholders and everyone else to get to know your business a little better. Apart from establishing an online presence, having a website has other benefits like:
Advertising
To keep your customers up-to-date
24×7 availability
Expand & Improve your business
Increased sales
Lead generation and customer feedback
In India, SMEs (Small and Medium Enterprises) account for almost 95% of the country's total industrial units, but the digital literacy of these SMEs is often not good enough for them to develop their own websites. Although there are many website-building services like Wix.com, they are quite complex for a non-technical person.
What it does
Wit.Bizz: the power of Wit.ai and WhatsApp Business in one.
It is an AI bot working at your service to create a website for your business in just one chat. Wit.Bizz aims to help all the SMEs and individuals who are not technically literate, helping them launch their business website by answering just a few questions.
How will it ease the present situation ?
Easy Accessibility
As WhatsApp is one of the most frequently and easily used apps, we added the Wit.Bizz bot as an extended feature of WhatsApp Business.
No need of technical knowledge
Just answer simple questions and voila your site is ready. No deployment hassle and reduced complexity.
NLP Driven
Wit.Bizz uses a Wit.ai model trained on user responses. Since our target audience is not technically well versed, a well-trained NLP model helps us understand the user's needs.
Steps
Open a WhatsApp Business Account.
Our bot, Wit.Bizz, will automatically send you a welcome message.
Continue chatting with the bot to personalise your website.
After giving the basic information, you can ask the bot to deploy the site.
The bot will return a custom URL for your business.
Modifications can be made after deployment as well.
Features
Build your own website in just one chat!
NLP driven Website builder bot
Seamless integration with WhatsApp Business
Customize themes and sections
Notifications
Customer enquiry handling
Monthly customer engagement statistics
How we built it
WhatsApp Business API: We used Twilio to send and receive WhatsApp messages.
NLP: We used Wit.ai to perform NLP on received messages.
Website Deployment: We used the Heroku CLI for website deployment.
Server Hosting: We used a DigitalOcean droplet for server hosting.
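Purely as an illustration of the message flow described above (Wit.Bizz itself is written in Node.js), the sketch below shows the general shape of a Twilio WhatsApp webhook that runs each incoming message through Wit.ai and replies; the intent names and token are assumptions:

```python
# Illustration only: a Twilio WhatsApp webhook that classifies each message with
# wit.ai and answers with TwiML. Token and intent names are placeholders.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
from wit import Wit

app = Flask(__name__)
wit_client = Wit("YOUR_SERVER_ACCESS_TOKEN")

@app.route("/whatsapp", methods=["POST"])
def whatsapp_webhook():
    incoming = request.form.get("Body", "")
    parsed = wit_client.message(incoming)
    intents = parsed.get("intents", [])
    intent = intents[0]["name"] if intents else None

    reply = MessagingResponse()
    if intent == "set_business_name":        # assumed intent name
        reply.message("Great! Now tell me what your business sells.")
    elif intent == "deploy_site":            # assumed intent name
        reply.message("Deploying your site now. You'll get a custom URL shortly.")
    else:
        reply.message("Tell me a bit about your business to get started.")
    return str(reply)
```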
Challenges we ran into
The difficult part was hosting a custom website for each user, so we used the Heroku CLI to deploy the websites from a DigitalOcean droplet.
Accomplishments that we're proud of
We are proud that our idea to help SMEs launch their businesses online easily was successful. With our bot, any business owner registered on WhatsApp Business can have a website without spending money or relying on anybody.
What's next for Wit.Bizz
Adding products through an Excel sheet (to be filled in by the user)
Receiving transactional notifications
Redirecting all contact-form queries to WhatsApp
Training the wit.ai model to support more website modifications
Built With
css
ejs
html
node.js
twilio
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/fdsfas |
Language and translation
Since our team is based in Brazil and our potential users are based here too, we decided to build our prototype in Portuguese. You can see the translation for the terms used on the bot here.
Inspiration
Glaucoma is the leading cause of irreversible blindness in the world. It is estimated that there are 79.6 million [1] people with glaucoma worldwide, and the American Academy of Ophthalmology estimates that this number will increase to 111.8 million by 2040 [2]. The main clinical treatment to prevent blindness caused by it demands correct daily use of eye drops and commitment to follow-up. But in a lot of cases, because the disease is initially asymptomatic, patients don't feel motivated to adhere to the treatment. Some studies [3] report a treatment dropout rate of 60.5% within 1 year of follow-up. We believe that a virtual assistant that helps patients get reliable information on the medication and the disease could help improve adherence to treatment. The information that Nery gives was validated by our team member John, who is an Ophthalmology resident at a university hospital in the major city of São Paulo, and by team member Juliana, who is a pharmacist. Nery is the name of the first Brazilian nurse, so our virtual assistant is named after her.
What it does
Since we had limited time to build it for the hackathon, right now Nery helps glaucoma patients get information on their medication, such as side effects and instructions on how to use it, in an interactive way. We want to add features that will help patients buy their medicine, remind them to use the medication, and help them schedule regular appointments with their ophthalmologists.
How we built it
For the natural language processing, we created an application on wit.ai and trained it with common questions about medication and common medicine names. We stored the desired answers for those questions in a database (DynamoDB) and created an AWS Lambda function (in Python) to retrieve those answers according to the Wit.ai results, which are sent through Facebook Messenger to our API Gateway. We chose to do it through Facebook Messenger because we don't want the patient to download any extra app to their phone or access any site they are not used to. We respect users' privacy, and for this prototype no user information is stored.
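To make the lookup step concrete, here is a sketch of how the Lambda could fetch a stored answer from DynamoDB given the medicine entity and question intent returned by wit.ai; the table name and key schema are assumptions, not the project's real ones:

```python
# Sketch with an assumed table name and key schema: fetch the stored answer for a
# given medicine + question intent inside the Lambda function.
import boto3

dynamodb = boto3.resource("dynamodb")
answers = dynamodb.Table("NeryAnswers")  # assumed table name

def get_answer(medicine, intent):
    # e.g. medicine="timolol", intent="side_effects", as extracted by wit.ai
    item = answers.get_item(Key={"medicine": medicine, "intent": intent}).get("Item")
    if item:
        return item["answer"]
    return "Desculpe, ainda não tenho essa informação."  # fallback, in Portuguese

def lambda_handler(event, context):
    entities = event.get("entities", {})
    intent = event.get("intent", "unknown")
    medicine = entities.get("medicine", "")
    return {"reply": get_answer(medicine, intent)}
```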
Challenges we ran into
The main challenge was to understand the core of patients' problems and match them with Nery's features. Regarding their problems, our validation process pointed out that many of them don't know how to use the eye drops properly (technically, they fail to recall the instructions).
Accomplishments that we are proud of
We received an award of 4000 Brazilian reais (somewhere between USD 500 and USD 1000) for our project from the Medicine department of the University of São Paulo, Brazil. (There wasn't any prototype at that point; the prototype was built *entirely* during this hackathon.)
What we learned
Among other things, we learned how to use Natural Language Processing (NLP) to make the interaction with the user as seamless as possible. We didn't want to use buttons; we wanted to use free text, because we believe this gives our users a better experience. So we learned a lot about how to deal with natural language variation on the Wit.ai platform.
What's next for Nery
We want to implement the missing features and validate our final prototype with patients. There is a list of 50+ patients who are interested in testing it right now. We would also like to implement it on WhatsApp, if possible. In the long term, we intend to use our tool to help patients with other health conditions.
Built With
amazon-dynamodb
amazon-web-services
facebook-messenger
python
wit.ai
Try it out
github.com
www.facebook.com |
10,007 | https://devpost.com/software/follow-your-passion-end-recession | The home page
The contact us page
The categories page
Inspiration
Given the current scenario of mass recession due to the widespread pandemic, people have lost their jobs, and some of them unfortunately have no source of earning at all. Seeing this, my friends and I thought of making a website that could make people aware of the various options they still have to earn, by having their own startups even after losing their jobs, which can help them sustain themselves in this crisis.
What it does
It makes people aware of the various startup options they can choose from to start earning and become entrepreneurs.
How I built it
We built it using html, css, javascript and php.
Built With
css
html
javascript
php
Try it out
startyourown.000webhostapp.com |
10,007 | https://devpost.com/software/ai-can-do | Home Screen
After adding tasks
Before adding tasks
Inspiration
We got the idea while trying to find a way to automate the process of creating reminders and to-do lists. As we use these systems quite a lot in our day-to-day lives, we felt that the current solution was quite clunky, requiring the user to type out and create these reminders. The idea of creating an AI bot that listens to what you have to do and creates these reminders for you is what led us to start this project.
What it does
This simple to-do list web app can authenticate users and take in text or audio input, identify keywords in your queries, and add them to your very own to-do list.
The web app can be accessed here:
https://ai-can-do.web.app/
How we built it
Materialize CSS Framework
The materialize CSS framework was used to build a simple and user-friendly website to run our application for both PC and mobile users.
Firebase Serverless Framework
The entire backend of our application is handled by Firebase, ranging from user authentication to retrieving the to-do list from our Firestore database.
Firebase database rules are configured to ensure clients can only access their documents and not others'.
Wit.ai
We trained a custom NLP model on the Wit.ai platform. This AI was able to take in both text and audio input, processing and extracting both the intent and title of a reminder. With this, we were able to automate the process of creating reminders by allowing users to simply talk to the app and the app would process this information, creating a to-do list of reminders for the user.
Overall, the front end is hosted using the Firebase serverless framework. HTTP requests are then made on the client side to communicate with our Wit.AI app and facilitate the intent and entity extraction.
Challenges we ran into
The browser's CORS policy prevented us from making HTTP requests directly from the client side (a protection against XSS-style attacks)
we used the cors-anywhere API for a temporary workaround
https://cors-anywhere.herokuapp.com/
The microphone permission prompt does not appear automatically for mobile users, who have to go into their browser settings to enable it manually
Permissions are automatically prompted for PC users
Accomplishments that we're proud of
Our Wit.AI chatbot is correctly identifying *most* keywords in the queries
This entire project was built and finished in less than 2 days
What we learned
Wit.ai toolkit
Learning to create simple and powerful NLP models using wit.ai framework
Structuring our project around the capabilities of the wit.ai model
Implementing Wit.AI functionalities into our web application through API calls
How to record audio input from browsers
Authenticating and updating documents in our Firestore database through a serverless framework like Firebase
What's next for AI Can Do
Include commands the AI can detect to facilitate functions like finishing or deleting a task through audio input
Implement our application with Google Calendars, a widely used and popular calendar so that tasks can directly be added there
Add more configurations to each task
Eg. due date, type of task, additional descriptions, adding teammates (other users) to tasks etc.
The AI will also be trained to identify these configurations
Enable Facebook sign-in
A Facebook for developers account will have to be set up to obtain the app ID and secret to enable this feature
Built With
axios
css
firebase
html5
javascript
serverless
webapp
wit.ai
Try it out
ai-can-do.web.app |
10,007 | https://devpost.com/software/lingobuddy | The LingoBuddy Story Dashboard (Desktop Web App)
The Story Designer (Desktop Web App)
Stories Page (Mobile Web App)
Play Page (Mobile Web App)
Story Feedback Page (Mobile Web App)
Inspiration
Voice is the most natural form of interaction for humans. We use voice to communicate our needs and express ourselves to the fullest. We participate in so many conversations daily on a wide variety of topics. The goal of the project is to build voice interactions that can serve two important purposes:
1) To help practice reading and speaking to improve conversational skills in a language of choice.
2) A medium to build & play Choose-Your-Own-Adventure Games based on voice.
What it does
LingoBuddy is a complete end-to-end platform for the creation and consumption of interactive voice-based stories. It allows an individual to create interactive & immersive stories through the story designer and share it with the world. At the same time, anybody on a mobile device can play the stories wherever they are and whenever they want.
It provides a medium to help with language learning, as it is vital to practice a language by taking part in conversations. The ecosystem can also be utilized to create choose-your-own-adventure games for all ages. It can even be used by teachers to make any learning more interactive and fun. The possibilities for such a platform are endless!
How I built it
The most crucial element of the entire ecosystem - speech recognition - is powered by Wit.Ai. Separate apps have been created on the Wit.Ai platform for different languages. For the mobile web app, the Wit.Ai library has been utilized to enable streaming on WebRTC & WebSockets. The LingoBuddy ecosystem can easily work with all the languages supported by the Wit.Ai platform (though currently, I have added the tokens for three languages: English, German, French). To add a new language, one can create an app on Wit.Ai, extract the Client Access Token and enter the language and the token in the language variable in the index.html file. It is as simple as that.
For the database and hosting, we are using Firebase (though it can easily be deployed on any system).
The whole web app (designer and the mobile website) does not rely on any backend system and has been entirely developed by using web technologies (HTML5, JavaScript & CSS).
_Note: To create stories and see the dashboard, please use a Chrome browser on a desktop environment. For playing the stories, kindly use a Chrome browser on a mobile phone._
Challenges I ran into
The main challenge was to create a UI/UX that feels clean and functional. It took a lot of experimenting with many design iterations to get the final version ready.
Another challenge was working with the speech recognition bit, as that is the soul of the ecosystem. Initially, the plan was to record the audio, send the audio data to a Python backend for processing through the Wit.Ai library, and then return the result. But this would have degraded the experience and made it a little less friendly for developers who may want to expand this ecosystem in the future. Luckily, I was able to work with the Wit.Ai microphone library, which worked smoothly (after a little tweaking, of course).
Accomplishments that I'm proud of
To create such a complex ecosystem that only works on web technologies!
What I learned
I have learned a lot while building this project! I have worked on a similar tech stack before, but never in this much detail. Such a complex ecosystem requires handling many scenarios while taking the perspective of all the players into account. I learned the power of Wit.Ai, and it has tremendous scope to expand on.
What's next for LingoBuddy
From my experience building this project, I understood the following additions could be considered:
1) Adding text-to-speech so a person can also improve his/her listening skills.
2) Add more types of blocks to the designer to make it more engaging in many more ways.
Built With
bootstrap
firebase
html5
webrtc
websockets
wit.ai
Try it out
github.com
lingobuddy-be6cf.web.app
lingobuddy-be6cf.web.app |
10,007 | https://devpost.com/software/sarah-plwz7d | My avatar talking to Sarah.
Inspiration
Teaching the English curriculum in a virtual classroom with AI-driven avatars seemed like fun.
What it does
Sarah is an avatar in a virtual 3D environment. You can click on her and ask questions about the US spelling curriculum for K1-6. She then replies with words and information, e.g. "What words are in year 3, term 2, week 8?" or "Tell me about year 5, term 1, week 7".
There are years 1-6, terms 1-4 per year and weeks 1-10 per term. That makes 6 years of lessons comprising 240 lesson plans, all accessible with just 2 questions.
How I built it
It is built with a wit.ai agent and an avatar added to the LearnBrite platform with NodeJS middleware building the responses.
Challenges I ran into
Hooking up these various platforms to work together.
Accomplishments that I'm proud of
Getting this all working and creating a virtual classroom! Also building an entire educational app series based on the same curriculum with the SpellNow series.
What I learned
Cool tools and solutions like wit.ai and LearnBrite are available for programmers to use in awesome innovative ways. Building things quickly with these tools is amazing and accessible to everyone.
What's next for Sarah
I would like to one day see Sarah teaching in classrooms around the world.
Built With
javascript
learnbrite
node.js
wit.ai
Try it out
app.learnbrite.com |
10,007 | https://devpost.com/software/glowbom-chat | Glowbom is the first no-code platform that lets you create software via chat, using just your voice.
Built With
dart
firebase
flutter
kotlin
swift
unity
Try it out
glowbom.com
github.com |
10,007 | https://devpost.com/software/vedika-virtual-health-assistant-covid19 | Inspiration
Given the current coronavirus pandemic the world is facing, there is a need to reach the masses and educate them.
What it does
The virtual assistant does a lot of things, like:
1 Self-assessment test
2 Live COVID-19 world stats tracker
3 FAQs
How I built it
The solution is built using Python and the Flask web framework. It uses two Facebook platforms: Messenger for the frontend and wit.ai as the NLP engine. It is deployed on Google Cloud.
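As a rough sketch of the Messenger-webhook pattern this kind of Flask backend follows (the verify token, reply logic, and intent names are placeholders, not Vedika's actual code):

```python
# Sketch: the standard Messenger webhook shape in Flask, with each text message
# forwarded to wit.ai. Tokens and reply logic are placeholders.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = os.environ.get("VERIFY_TOKEN", "my-verify-token")
WIT_TOKEN = os.environ.get("WIT_TOKEN", "")

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger's one-time verification handshake.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def handle_message():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            text = event.get("message", {}).get("text")
            if not text:
                continue
            wit_resp = requests.get(
                "https://api.wit.ai/message",
                params={"q": text},
                headers={"Authorization": f"Bearer {WIT_TOKEN}"},
            ).json()
            intents = wit_resp.get("intents", [])
            # ... map the top intent (self-assessment, stats, FAQ) to a reply and
            # send it back via the Messenger Send API ...
    return "ok", 200
```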
Challenges I ran into
There were no major technical challenges. The challenges we faced had more to do with how presentable the whole user experience could be. We decided to give the virtual assistant a human-like face and outfit and named her "Vedika", a Hindi word which means "full of knowledge".
Accomplishments that I'm proud of
I believe the solution has the potential to make a real impact in our community, and that is what makes it extra special. Apart from this, we were able to use both Facebook platforms very efficiently, and the integrations went smoothly.
What I learned
Though I have built chatbots in the past, this was my first time using Facebook platforms to do so.
What's next for Vedika -Virtual Health Assistant (COVID19)
I believe this is just the beginning for Vedika. We need to further extend her knowledge base. Apart from this, I also wish to integrate live chat functionality, where doctors can take over a conversation when Vedika flags a user as high risk.
Built With
facebook
facebook-messenger
flask
gcp
python
wit.ai
Try it out
www.facebook.com |
10,007 | https://devpost.com/software/engage-the-you | My Wit App for Facebook AI Hackathon
Inspiration
A few months ago, when I was sitting with my grandparents, I felt the absence of mental support for them, as everyone at home is busy with their work at the office and elsewhere. This situation prompted me to make a chat companion that tries to talk with them about how they are feeling and about other topics, like what it's like to be in an amusement park or what's currently going on in France.
What it does
It just talks continuously like a human, and it is not designed for narrow tasks (like Siri). It is inventive in its conversation, and there is no wake word you have to say every time you speak.
How I built it
I built it using the Wit.ai API: I created various utterances on the app page and used the API from Python to call them.
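A minimal console sketch of that loop, using the Wit.ai Python package (the token and intent names are placeholders for illustration):

```python
# Sketch: a simple back-and-forth loop that classifies each utterance with wit.ai
# and answers with an open-ended reply. Token and intent names are placeholders.
from wit import Wit

client = Wit("YOUR_SERVER_ACCESS_TOKEN")

while True:
    said = input("You: ")
    if said.lower() in {"quit", "bye"}:
        print("Bot: Take care! Talk to you soon.")
        break
    parsed = client.message(said)
    intents = parsed.get("intents", [])
    intent = intents[0]["name"] if intents else None
    if intent == "feelings":            # assumed intent name
        print("Bot: I'm glad you told me. What made you feel that way?")
    elif intent == "smalltalk_places":  # assumed intent name
        print("Bot: Amusement parks are such fun! Have you been on a Ferris wheel?")
    else:
        print("Bot: Tell me more, I'm listening.")
```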
Challenges I ran into
One of the challenges I faced was adding continuous utterances so that the bot remains fluent and doesn't stop.
What I learned
Honestly, it was my first experience with Wit, so I learned various new things, the main one being how to use the Wit API.
What's next for Engage the You!
I am trying to add more examples so that my bot becomes proficient and makes the lives of elderly people better.
Built With
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/mitra-your-friend-in-crisis | Our Inspiration
The current lockdown situation, as well as last year's floods in our region, made us ask: **"How can one get help in such emergency situations?"**
Last year's floods displaced thousands of people, and a few even lost their lives. But the most important part is that people from small villages couldn't get help easily, be it food, shelter, healthcare or anything else. A similar situation arose during this COVID pandemic.
So this time we thought of finding a solution: how can help reach even the remotest of the remote and the poorest of the poor?
After a lot of brainstorming, developing a chatbot was found to be the most feasible solution. Using this chatbot, we intend to provide all kinds of emergency services; our chatbot would be like a friend in an urgent crisis.
Hence, **MITRA** (the Marathi word for friend) was born!!
What it does
Emergency situations don't come announced, so it is essential to have the necessary information handy at all times. Mitra, our bot, helps to address this.
**Mitra does the following tasks:**
Provide COVID symptoms and precautions.
Provide statistics of COVID cases for all states of INDIA and all cities of Maharashtra.
Provide a list of NGO/hospital/shelter/ambulance locations for Sangli, Kolhapur, Pune, Solapur and Satara.
We confined ourselves to these cities as they fall in our neighbouring regions, and within the limited timespan we wanted to create something highly effective rather than something large-scale but of less impact.
Provide all the required emergency helpline numbers.
Suggest light and easy **YOGA** exercises for mental peace and health. (Credit: Isha Foundation).
Suggest curated playlists of soothing music for better relaxation. (Source: Spotify Music).
How I built it
Used Facebook's **Messenger** as the interface.
To extract the required intents and entities, used Facebook's **Wit.ai** NLP platform.
To get COVID statistics, used the **IndiaCOVID** API (a small sketch of this lookup follows this section).
Additionally, developed a **website** which provides complete info about our bot 'Mitra'.
Used the **Messenger plugin** on the website to directly interact with the chatbot.
The response generator, which generates user responses based on the identified intents and entities, as well as the website, are built on a **Flask** backend.
Both Flask apps are deployed on the **Heroku** cloud.
**System Diagram of Mitra:**
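As promised above, here is a small sketch of the statistics lookup. The endpoint and field names are assumptions based on the public covid19india.org JSON feed that was commonly used at the time, not necessarily the exact API the bot calls:

```python
# Sketch of the state-wise stats lookup (endpoint and field names are assumptions).
import requests

def state_stats(state_name):
    data = requests.get("https://api.covid19india.org/data.json").json()
    for row in data.get("statewise", []):
        if row.get("state", "").lower() == state_name.lower():
            return (f"{row['state']}: {row['confirmed']} confirmed, "
                    f"{row['recovered']} recovered, {row['deaths']} deaths.")
    return "Sorry, I couldn't find stats for that state."

print(state_stats("Maharashtra"))
```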
Challenges I ran into
Creating accurate intents and entities necessary for smooth conversation.
Generating the necessary responses based on the identified entities, which included extracting information from Excel files (pandas manipulation) and presenting it in the chatbot.
Writing a proper script to parse the huge amount of information returned by the COVID-19 India API.
Integrating it all together with the Flask backend and cloud deployment.
Accomplishments that I'm proud of
Satisfied with the amount of code and analysis we could do in a week.
Overcoming each and every error or hidden bug we found.
Created an end-to-end project pipeline.
Learned a lot from shelter heads while gathering data about how the poor live and struggle.
(Not an accomplishment, but surely an eye-opener!!)
What we learned
First and foremost, a proper analysis and an on-paper design always help and eliminate most bugs, compared to coding directly. This is a new habit we learned during the project.
Code documentation and commenting habits.
Learned basic NLP along the way, like intents, entities, tokenization, bag of words, etc.
Learned about APIs, webhooks and integration.
Learned hosting webapps.
Finally, learned about a variety of Facebook products I was completely unaware of, like **Wit.ai** (an excellent NLP product after its recent refresh) and the **Messenger** API.
What's next for Mitra - Your Friend in Crisis
We can add speech recognition and a speech interface, which could then be used with next-level AI assistants like Alexa and Google Assistant.
Plan to extend the helplines of NGOs, food shelters and hospitals to all cities.
Integrate a location API (like Google Maps search), which will make responding to location queries easier.
Extend the COVID API to get statistics for all cities (at least major cities) of India.
Provide personalized, mood-based music and yoga playlists integrated into the chatbot so that users don't have to hop over to another app or webpage.
Built With
flask
heroku
indiacovid19-api
python
wit.ai
Try it out
mitra-web-app.herokuapp.com
www.facebook.com
m.me
github.com
github.com |
10,007 | https://devpost.com/software/ai-voice-assistant | Inspiration
The inspiration was always to learn new technology.
What it does
It is a voice assistant powered by wit.ai. Users can use the voice interface to ask the assistant anything from basic things like the time and date to information about any topic, as well as play songs, tell personalized jokes, automate the browser, report COVID-related data, and so on.
How I built it
The main languages I used were JavaScript and Python. The UI was done in Electron, and the backend with the wit.ai integration was done in Python 3. I used a library called Eel to connect my Python code to the JS.
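A minimal sketch of that Eel + wit.ai glue (the folder/file names, token, and intent names are placeholders, not the project's actual code):

```python
# Sketch: expose a Python handler to the Eel frontend; the frontend sends the
# recognized text, Python classifies it with wit.ai and returns a reply.
from datetime import datetime

import eel
from wit import Wit

eel.init("web")  # assumed folder containing index.html and frontend assets
client = Wit("YOUR_SERVER_ACCESS_TOKEN")

@eel.expose  # callable from JavaScript as eel.handle_command(text)
def handle_command(text):
    parsed = client.message(text)
    intents = parsed.get("intents", [])
    if not intents:
        return "Sorry, I didn't understand that."
    intent = intents[0]["name"]
    if intent == "get_time":     # assumed intent name
        return datetime.now().strftime("It's %H:%M.")
    if intent == "covid_stats":  # assumed intent name
        return "Fetching the latest COVID numbers..."
    return f"Detected intent: {intent}"

eel.start("index.html", size=(420, 640))
```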
Challenges I ran into
The main challenge I ran into was lack of time, so I could not add all the features that I wished to include.
Accomplishments that I'm proud of
In spite of the lack of time and resources, I managed to complete the project
What I learned
I got to learn wit.ai a very powerful tool for NLP related projects.
What's next for AI-Voice assistant
The next feature I would like to include is complete home automation using my assistant.
Built With
eel
electron
javascript
python
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/preventingcovid19 | Greeting & getting latest news on Covid
News based on countries & number of cases
Other News & thank you + feedback survey
PreventCovid19Together
Inspiration
The inspiration for this project comes from the pandemic (COVID-19) that we are facing right now. Every day there are many cases of suicide and distressing incidents that result from unhealthy lifestyles. The main cause of this: loneliness. As such, my application aims to help connect people to the outside world despite the lockdown and let them have a little chat with this little assistant, because a small friend can help a person greatly.
What it does
It is a simple application that retrieves user queries related to COVID-19. It acts as a friendly companion for users and provides them with the information they want to know. Imagine having to Google search everything related to the virus: this opens up many tabs, takes up more RAM and time, more webpages get opened, and users have to keep moving around and typing again. With this application, all their questions are answered within a simple chatbot.
How I built it
I built it by first using Wit.ai to specify the intents and entities I needed. I researched suitable deployment options and found Heroku to be a good choice. I used the Git CLI to push changes and implement versions. Originally, I needed an online server to test my application, hence I used Ngrok. I managed to echo out my own messages using Facebook Messenger. Once my application was working, I proceeded to migrate the entire application to Heroku. This is when I started to output the response(s) I wanted the application to give to users.
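A hedged sketch of the webhook at the echo stage described above (tokens and the Graph API version are placeholders):

```python
import os
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = os.environ.get("VERIFY_TOKEN", "my_verify_token")          # placeholder
PAGE_ACCESS_TOKEN = os.environ.get("PAGE_ACCESS_TOKEN", "YOUR_PAGE_TOKEN")  # placeholder


@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger webhook verification handshake
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403


@app.route("/webhook", methods=["POST"])
def handle_messages():
    payload = request.json
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender = event["sender"]["id"]
            text = event.get("message", {}).get("text")
            if text:
                send_text(sender, f"You said: {text}")  # echo step before real answers
    return "ok"


def send_text(psid, text):
    """Reply through the Messenger Send API."""
    requests.post(
        "https://graph.facebook.com/v12.0/me/messages",  # version is an assumption
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": psid}, "message": {"text": text}},
    )
```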
Challenges I ran into
Finding a suitable deployment space for my work
Heroku ModuleNotFoundError: No module named 'wit'
Server Issues
Accomplishments that I'm proud of
This is the first project I have ever done with Wit.ai and Heroku. I am really proud of being able to echo my messages out and being able to provide responses to the queries that have been made by the users. I also managed to use Git and command prompt to push the updates as well.
What I learned
I would really like to thank Facebook AI for hosting this competition. I learnt that finding a good problem to solve takes time and developing the application takes even longer. I also learnt to research issues independently. Wit.ai is a really useful tool that I have learnt, and I will definitely utilize it in the future.
What's next for PreventingCovid19Together
There will definitely be more coming! :) I will work on retrieving links for the responses, along with high-level output that answers most of the user queries without them leaving the website. Right now, what I did was only provide output without any real answers to their questions. I hope to be able to work on that and make it a viable solution.
Built With
git
gunicorn
heroku
ngrok
python
sublime-text
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/teleport-reinventing-travel-for-the-new-world | Scuba diving in VR
At the top of the Everest!
Viewing animals in VR
Getting started
Inspiration
Travel was entirely disrupted by the current Covid-19 pandemic.
In the post-pandemic world, people will likely travel less, or travel to closer destinations.
But what if you could go anywhere in the world, from the safety and comfort of your home?
That's exactly what Teleport is for.
Teleport allows you to instantly travel to any place in the world using virtual reality.
It is so immersive that you will think you are actually there.
You can hear the sounds and view the experience as if you were there and even invite friends to go with you.
And if you decide to experience the view in the real world, you can simply book a trip from within the VR app.
So my question is: where do you want to get Teleported to?
What it does
Teleport is an interactive virtual reality experience. You travel around the world by simply saying voice commands and choosing your next destination or experience.
The app understands what you are saying by using Wit.AI speech recognition and its advanced NLP features.
Here is a list of the supported commands:
Travel around the world
Take me to Rome
Show me next attraction
Tell me about this experience
Show me a famous painting
Experience new adventures
Take me to the Everest
Show me the wildlife
I want to go scuba diving
Social features
Invite my friend
friend name
End call
Booking a trip
Book a trip to Australia in August
Choose the first option
Confirm payment
End booking
Control the VR experience
Pause
Resume
How I built it
I have used VideoJS, ThreeJS, VideoJS-VR, and the Web Device Orientation API to build the virtual reality experience.
The Web Audio Analyser API was used to detect when the user started and stopped speaking.
Finally, the Wit.AI speech recognition and its NLP features were used to detect the user intents.
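For illustration (in Python, although the app itself runs in the browser), sending a recorded clip to Wit.ai's speech endpoint might look roughly like this; the response handling assumes a single JSON object, and the intent/entity names are placeholders:

```python
import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder


def recognize_command(wav_path):
    """POST a short WAV recording to Wit.ai and return (intent, entities).

    Assumes the endpoint returns a single JSON object; newer API versions may
    stream partial results instead.
    """
    with open(wav_path, "rb") as f:
        resp = requests.post(
            "https://api.wit.ai/speech",
            headers={
                "Authorization": f"Bearer {WIT_TOKEN}",
                "Content-Type": "audio/wav",
            },
            data=f,
        )
    data = resp.json()
    intents = data.get("intents", [])
    intent = intents[0]["name"] if intents else None
    return intent, data.get("entities", {})


if __name__ == "__main__":
    intent, entities = recognize_command("take_me_to_rome.wav")  # hypothetical clip
    print(intent, entities)
```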
Challenges I ran into
Detecting when the user started and stopped speaking was one of the main challenges. It was necessary because otherwise I wouldn't be able to correctly identify the voice commands, as the Wit.AI API expects an audio file of 10 seconds in length.
I tried different approaches, and the final solution was an always-on volume change detector using the Audio Analyser API that triggered when the volume rose and fell, to mark when the user was speaking or not.
Accomplishments that I'm proud of
I'm proud of the immersive experience and the simple user interface I was able to create by mixing VR + Wit.AI speech recognition and NLP features. The final product is simple to use, yet powerful, immersive and engaging.
What I learned
It was a great learning opportunity to learn how to:
Detect user intents from audio using Wit.AI
Detect user intents from text using Wit.AI
Use 360º video formats and viewing
View virtual reality videos on the web
Use the Web Audio API to record the user's audio
Use the Web Audio Analyser API to detect changes in the user's microphone volume level
Use the Web Device Orientation API to read the user's device orientation
What's next for Teleport - reinventing travel for the new world
There are many exciting things to do in the next version:
Add more places and videos
Add more experiences
Add search features
Allow multi-users call
Take a picture and share it with your friends
Credits
Rome video -
https://www.youtube.com/watch?v=1ziMH_lAUW0
Sistine Chapel -
https://www.youtube.com/watch?v=7jHygRhvHss
Everest -
https://www.youtube.com/watch?v=7g2k0eEQUaM
Scuba Diving -
https://www.youtube.com/watch?v=mG-A_Tj23B4
World -
https://www.youtube.com/watch?v=dwHBpykTloY
Icon -
https://www.flaticon.com/free-icon/travel_2798100
Built With
360-video
speech-recognition
vr
wit.ai
Try it out
teleport.guru
github.com |
10,007 | https://devpost.com/software/dilemmabot-a-bot-that-can-help-you-with-your-dilemmas | What you can achieve with the help of the bot
How to make use of the bot
How to submit your question
Why content moderation is useful
How Quick Replies feature is helping in improving both user and developer experience
How OTN is crucial for the app
How custom Wit.ai model helps in making the interactions with the bot more human like
How NotifyBot persona is useful for sending notifications without confusing the users
How edge cases are handled
Credibility Calculation function
Inspiration
I always wanted to build an app that could
help people make good decisions
because so many times we see (around us and in our lives) that it is a common problem to get stuck in a dilemma about which option to choose in a given situation. A possible solution is to create a polling app that helps people create polls for others to participate in. This way people can get an idea of which option to go for from the poll results.
But here's the catch! Do polls really reflect honest, unbiased and un-opinionated results?
Clearly polls are good for questions where we want others' individual opinions, e.g., when we want to know how many people prefer a certain product and how many prefer the other one. But when we want help in deciding for ourselves and need a single best choice, we can't rely on these polls because they can be biased and can reflect individual opinions and beliefs. Some people might just act mischievously and select the option that clearly appears bad. These create noise in the outcome of the poll. Hence, to bring a good (if maybe not perfect) surety that the polls actually represent the wisdom of the crowd as a whole and that noise elements are curbed,
I mixed the concept of a polling app with the concept of credibility scores to incentivize users to think more rationally while answering a poll, instead of voting based on personal biases or mischievous behavior.
I believe this will help building a community that will in turn help each other solve several common problems in life. That's my goal.
What it does
The bot introduces a new way to interact with each other anonymously on Messenger by creating and answering polls.
It lets a person create a poll where the user posts a question (using text and optionally images) along with some choices (max 5) for others to vote on. Others can participate in the poll and vote for the option they feel is more feasible and better for the situation. Finally, after the poll has ended, the person who created the poll gets the analysis in the form of a chart. The participants who opted in to get notified about the results also receive the answer (not the complete analysis). All participants will have their credibility scores (explained in the next segment) updated and can ask the bot about them (those who opted in to be notified will be notified about the change explicitly; others will have to ask the bot themselves to see if any change has come). A user can have only 1 ongoing poll.
The bot uses a smart credibility score calculation algorithm to bring good surety that the extreme cases in a poll don't have much impact on the final outcome. This means that those who vote in favor of the more extreme option (by extreme I mean one that really appears to be a bad solution) have less impact on the poll result.
It needs at least 2 users to test the above functionalities.
How I built it
The tech stack for this project includes TypeScript (Node.js) with the Express.js framework, the EJS template engine and a PostgreSQL database. I am using a custom-trained model from Wit.ai for NLP and the Cloud Vision API for image content moderation. For charts, I am making use of the Chart.js library.
I divided the entire app into 2 major flows -
asking question/creating poll and answering question/participating in poll
. Apart from these 2 flows, there are other minor interactions as well, such as the Getting Started, Fetching Credibility Scores, Ending an Ongoing Poll, and Greeting interactions. For each interaction, I have implemented a separate function. The 2 major flows are further divided into states; the user, while interacting in a flow, transitions from one state to another. These states are handled in separate functions as well.
When a user wants to start a new poll, his/her PSID gets stored in a Set data structure (instead of an Array, for better search-time performance) and his/her question gets stored in a Map data structure (instead of plain objects, for better search performance). Similarly, the options also get stored in a separate Map. Finally, after all the inputs are made, the poll object (containing the question and options as well as some extra variables for storing data like the PSIDs of users who have voted in this poll) is created and stored on a queue. Now if a user asks for a poll, a poll which this user has not participated in yet is fetched.
The following pic describes the Credibility Scores calculation function -
The range of (0, 100) is only for showing credibility scores to users in a decimal-point-free manner. The actual credibility scores that the database stores for the users are in the range (0.5, 1.5), and the calculation of points for options uses this range only. For example, the maximum increase in points for an option due to a single user can be 1.5 (slightly less than it) and the minimum can be 0.5 (slightly more than it).
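A simplified Python illustration of this weighting and rescaling idea (the bot itself is written in TypeScript, and the update rule below is only a stand-in for the actual function shown in the screenshot):

```python
def tally(votes, credibility):
    """votes: {user_id: option}, credibility: {user_id: float in (0.5, 1.5)}.
    Each vote adds the voter's stored credibility to the chosen option's points."""
    points = {}
    for user, option in votes.items():
        points[option] = points.get(option, 0.0) + credibility.get(user, 1.0)
    return points


def display_score(raw):
    """Map the stored (0.5, 1.5) range onto the decimal-free (0, 100) scale shown to users."""
    return round((raw - 0.5) * 100)


def update_credibility(raw, distance, max_distance, step=0.05):
    """Nudge a voter's score based on how far their option's points were from the winner's.
    (Simplified: the real function keeps changes small and distance-proportional.)"""
    penalty = step * (distance / max_distance) if max_distance else 0.0
    return min(1.5, max(0.5, raw + (step - 2 * penalty)))


votes = {"u1": "A", "u2": "B", "u3": "A"}
cred = {"u1": 1.2, "u2": 0.7, "u3": 1.0}
print(tally(votes, cred))   # roughly {'A': 2.2, 'B': 0.7}
print(display_score(1.2))   # 70
```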
I also maintain a DB containing data on which state a user is in and things like whether the current user has asked a question previously, has answered polls previously, etc., so that for users who are completely new, I can present long descriptive messages in every flow to explain things, and for those who have carried out the respective actions before, present short messages. This was to improve user experience.
PS: I made some last-minute changes to the bot, hence there is a very slight difference between how the analysis chart is labeled in the video and in the real bot. Also, now one can post questions using both text (at least 80 alphabetic characters) and images (at most 2). Lastly, the video is slightly fast-paced to reduce its length.
How custom Wit.ai model helped in making interactions more natural
Since this is a chatbot experience, the only way for users to use its features is through messages. Hence it becomes crucial that these interactions through messages feel natural to the user. The user should be able to talk to the bot without having to structure his/her message in a particular format to get things done. This is where the Wit.ai model plays a crucial role. I have made use of a custom Wit.ai NLP model to understand what the user wants to do. When that is known, a check is made on which state the user was in when the message came. This helps the bot know the context of the interaction with the user and what role his/her current message plays. If the user is in a flow and sends a message to exit it, then the bot understands that the user is in a particular state and wants to exit it and go back to state 0, and the appropriate functions are called for that. Similarly, when the user sends some text which does not correspond to any available commands and the user is currently in the asking-question flow, the bot understands that the text is part of the question the user wants to ask, and acts accordingly. This way the bot is able to provide a natural interaction experience to the users.
How Quick Replies feature helped in improving both developer and user experience
To ensure that while participating in a poll the user is able to vote for only one option, and is able to do it conveniently without having to write out the entire option, I am making use of the Quick Replies feature of Messenger. This greatly improves user experience and eases my job of verifying their input, because otherwise I would have to make several checks, since the user might not type the entire answer word for word with proper punctuation marks.
How One Time Notification feature is crucial for the app
A user might participate in several polls one after another but may not want to get notified of every poll's outcome, hence I have added an interaction to the answering-question flow where the persona bot (I have mentioned in the Challenges section why I used a persona) asks if the user wants to get notified. It is very likely that the creator of the poll might leave the poll open for a day or a few, hence to ensure that the participants who opted in to get notified actually get notified, I am making use of the One Time Notification feature of Messenger. This way the OTN feature plays a crucial role in the app.
How content moderation is carried out
Since the content that a user posts has the potential to reach a wide audience, it is very important to keep checks on the type of content the user is posting.
For checking questions and the options submitted for any swear words, I am using a
Node.js library: bad-words
.
For keeping check on the images that user posts for any nsfw or adult content, I am using
Google Cloud Vision API
.
Lastly, for checking that the image the user has posted doesn't contain any offensive text, I am using
Vision API
for text detection and then
bad-words
library for detecting swear words in the text.
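A rough Python sketch of that moderation pipeline (the bot uses the npm bad-words package; here a small blocklist stands in for it, and Google Cloud Vision credentials are assumed to be configured):

```python
from google.cloud import vision

BLOCKLIST = {"badword1", "badword2"}  # stand-in for the bad-words library's dictionary
vision_client = vision.ImageAnnotatorClient()


def has_profanity(text):
    """Cheap stand-in for the npm `bad-words` filter."""
    return any(word in BLOCKLIST for word in text.lower().split())


def image_is_safe(image_bytes):
    """Reject images flagged as adult/racy, or whose embedded text contains profanity."""
    image = vision.Image(content=image_bytes)

    # Safe Search check for nsfw/adult content.
    safe = vision_client.safe_search_detection(image=image).safe_search_annotation
    flagged = (vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY)
    if safe.adult in flagged or safe.racy in flagged:
        return False

    # OCR the image, then run the same profanity check on the extracted text.
    annotations = vision_client.text_detection(image=image).text_annotations
    extracted = annotations[0].description if annotations else ""
    return not has_profanity(extracted)
```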
Challenges I ran into
The biggest challenge was to create the flows and the state transitions within them. Since there can be multiple users interacting simultaneously, it was a huge challenge to maintain states properly so that all requests could be handled correctly. Organizing the code so that it could be extended later was very important. The state transitions of the users were tough to handle since there can be more than one state that a user can land in from his/her current state. So this was the part where I spent most of the time thinking about how to write the flows and state transitions.
A major challenge was training the custom model in Wit.ai for my bot so that it can distinguish between user commands and user input (questions and choices) flawlessly. Even while being in the flow of entering a question, there is a possibility that the user might want to end the flow, so I had to ensure that the model was trained enough to distinguish between a question and a command to end the flow or do something else, so that the bot can act accordingly. For example, if the user enters the command to participate in a new poll while he/she is in the middle of creating a poll, then the model should be able to recognize it so that the bot can tell the user that he/she needs to finish the current flow before starting another.
There are still some issues with the Greetings interaction. Sometimes the bot confuses simple greetings with commands. For example, 'Hello Bot' is a greetings message but 'Hello Bot I would like to start a poll' is not. Hence, to avoid confusion, I have trained the bot to recognize the latter statement as a command, but this has led to the bot sometimes not being able to recognize greetings with good confidence. So that's a trade-off I made to focus more on functionality. But I am working to fix it.
The design of the credibility calculation algorithm had to be such that the new credibility scores of the users are not significantly different from their previous values. Also, the change for every user should differ based on how far their option was, in terms of total points, from the winning option: the further it was, the greater the decrease should be, and vice versa. This design was challenging to come up with.
I also faced an issue related to the user experience of the participants who opt in to get notified about the poll results, since it's not predetermined when the poll creator will end the poll. So it might happen that some of these participants are in the middle of another flow, maybe creating their own polls, when the creator decides to end his/her poll. This is problematic because the participants might get confused, if they are interrupted suddenly with the notification, about whether they should continue writing their question or restart the flow since it was interrupted. One solution is to mention something like 'continue with whatever you are doing' at the end of the notification message. This covers both the case when the user is in the middle of a flow and when he/she isn't. But if DilemmaBot mentions this type of line in the message, then it makes the conversation unnatural, because it should appear as if the bot knows what's happening currently. Hence I created a separate persona, NotifyBot, whose sole job is to send notifications and tell the user to carry on with whatever he/she was doing.
Accomplishments that I'm proud of
I am very proud that I was able to create a chatbot experience that serves the functionalities of a traditional app using just the messages.
The concept of credibility scores might not seem very novel since various apps make use of a similar concept in the form of upvotes/downvotes, likes/dislikes, etc. However, I was able to come up with something that could be used with respect to polls, and I am proud of this.
Being an absolute beginner in machine learning concepts, this was the first time I worked with NLP. This was challenging because I had to ensure the model was trained enough to distinguish between user questions and commands and not confuse the two, as that would lead to bugs.
I am very happy to be able to build a complete product and not just a prototype. I was able to build the chatbot keeping in mind different ways user can interact with it. I have added
mark_seen
and
typing
signals to improve user experience. Although there might be some cases that I am missing but overall it was a great experience building the logical flow of the Messenger experience.
What I learned
I learnt how to create complex flows for a chatbot experience. The messages that come from the user do not carry information about previous conversations, hence it was crucial to ensure that the bot keeps track of the conversations, and to design the state transitions and flows properly so that the interactions with the bot appear smooth instead of artificial. Although the bot replies using pre-programmed texts, how and where the bot uses these texts really makes the conversation feel natural. For me, building this type of experience was new.
I learnt to organize code (although I am still not very good at it). I had to write appropriate functions for handling different states and commands, in such a way that I could debug the code and extend it further without much trouble. This was challenging, but somehow things worked out.
What's next for DilemmaBot
I tried to integrate the sentiment analysis feature of Wit.ai to classify questions into 2 categories - Light-Hearted (positive and neutral) and Serious (negative). However, due to lack of expertise and data, I could not train it the way I wanted. The issue was that if a certain text had words like 'stress' in it, then it would get classified as negative even though the issue might not be serious at all. We can't ask the user to tag the question himself/herself as he/she might not give correct tags. Hence I would like to collaborate with an expert in this field and add this feature. It would help separate serious issues from others so that they could be given better attention.
Right now the bot does not work with videos; however, it's likely that users may want to use videos while creating polls, so I will work on adding support for this media element to the bot.
The concept of credibility scores does look promising, but there are possibilities to improve it. So, based on users' feedback, I will work on further improving the concept so that I can offer a better decision-making bot.
As mentioned in the Challenges section, I still face some issues with the Greetings interaction. So I am working to fix that.
I have tried to cover many edge cases but still bugs can come up and also there can be more edge cases left to be covered, so I will work on solving those bugs and edge cases.
Credits and Attributions
Apoorva M K's ReWise bot was helpful in understanding the flow of chatbot interactions.
Bell icon and Question mark icon for bot thumbnails were taken from
Flaticon.com made by Freepik
Used an iPhone X mockup from
Mockuphone.
Built With
express.js
figma
google-cloud-vision-api
node.js
postgresql
typescript
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/detectnow-bot | Technical Architecture
Inspiration
DetectBot
messenger
is a landline and Messenger bot which uses Artificial Intelligence to detect COVID-19 from cough sound recordings. Our vision is that every coughing person can get tested for COVID-19 at zero cost directly from home. It can also be accessed from a landline without requiring the internet.
In order to help vulnerable societies all over the world, and since internet and smartphone penetration is quite low in rural areas and third-world countries, we provide easier access to COVID-19 testing simply through the recording of your cough, at zero cost, from a landline in addition to Messenger. We ask users to record 10 seconds of cough alongside some additional optional demographic and medical questions to make accurate predictions about the diagnosis.
What it does
DetectBot
is a chatbot built with
wit.ai
to guide our users step by step for easier data collection, rather than visiting a web form. We provide clear and necessary instructions during the usage of the bot to ensure the safety of the user, his/her surroundings, and the devices being used for collection. We also collect all the consent required by medical and user data privacy compliance rules (GDPR, HIPAA) to make sure that the data of the user stays private, safe, and anonymized.
DetectBot follows differential privacy approaches. Differential privacy is a system for publicly sharing information about a dataset by describing the patterns of groups within it while withholding information about individuals.
We collect biological data (e.g. age, weight, size, gender) and medical data (e.g. temperature, breathing rate) which we use to run machine learning models to detect bio-signals of the COVID19 cough.
Landline services can be accessible from the following numbers,
+44 20 7365 7186 (Europe)
+1 844-230-5884 (US)
+41 800 110 318 (CH)
Or from the DetectNow page,
https://www.facebook.com/DetectNow-109463114150068/
DISCLAIMER:
The diagnostics function is not live yet, as this will require a medical trial first.
How we built it
We built the chat-bot using
wit.ai
to enable people to interact with DetectNow services using voice and text. For backend development, we used Node.js for the logic handling of the bot framework and used the Facebook Messenger integration with that Node.js application for session, dialogue, and model handling.
We've used AWS Polly along with AWS Lex to provide integration of our bot over the telephone helpline in Europe and the USA. We used an AWS Lambda function for the integration of wit.ai with the AWS services via API communication. We collected 500 crowd-sourced data points from the #codevscovid19 hackathon, on which we trained the AI model for COVID-19 classification. The AI model and backend infrastructure are explained in the diagram below.
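As a rough illustration of the Lambda glue in that diagram (the event and response shapes are simplified assumptions), the handler could look like:

```python
import json
import os
import urllib.parse
import urllib.request

WIT_TOKEN = os.environ.get("WIT_SERVER_TOKEN", "YOUR_WIT_SERVER_TOKEN")  # placeholder


def wit_message(text):
    """Query Wit.ai's /message endpoint over plain HTTP."""
    req = urllib.request.Request(
        "https://api.wit.ai/message?q=" + urllib.parse.quote(text),
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def lambda_handler(event, context):
    """Entry point wired to the voice channel; 'inputTranscript' is the assumed event key."""
    text = event.get("inputTranscript", "")
    data = wit_message(text) if text else {}
    intents = data.get("intents", [])
    intent = intents[0]["name"] if intents else "fallback"
    return {
        "statusCode": 200,
        "body": json.dumps({"intent": intent, "entities": data.get("entities", {})}),
    }
```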
Team
We are a group of machine learning experts, doctors, and entrepreneurs from Switzerland, Egypt, Germany, China, Ukraine, India, Pakistan, Greece, and Spain. We initially found each other through a Slack group during the #codevscovid19 hackathon, are working completely remotely, and expanded with new team members to make it production-ready. During the Facebook AI hackathon, we also enabled support for the Messenger and landline channels on top of the
web platform
Challenges we ran into
We heard about the competition two weeks before the end of the submission period, and therefore we worked very hard as a team to finish the prototype before the deadline.
Getting started with wit.ai was challenging for us, as this was all new for our team members and there are very few online resources for getting started with the Wit framework and integrating it with other services like AWS.
Accomplishments that we're proud of
Working on a common mission with a team of machine learning engineers, doctors, and entrepreneurs.
We started developing the AI as part of HackZurich, managed to collect crowd-sourced data with a web application, on which we further trained our models, and to give even easier access we also enabled the voice and Messenger bot channels.
Having built a functional prototype in just three days with functionalities like bot interface, connecting it with AWS services, and with a landline.
Learning wit.ai in a short amount of time: we started working on it almost two weeks before the deadline and we're quite happy with the progress we've made so far.
What we learned
Our team members had experience with building chatbots using Lex, DialogFlow, Azure bot Framework, and Rasa. But this is our first time using
wit.ai
and it was challenging and time well spent! Besides, with the integration of AWS services (especially Connect, Polly, and Lambda), Wit, and Node, it was challenging and rewarding to orchestrate a solution that works using all the components.
What's next for DetectBot
We will add further features in the realm of telemedicine, scheduling appointments, and tracking disease spreading. Enabling:
easier & earlier diagnosis of respiratory diseases like Asthma, Bronchitis, Pertussis, etc.
a clinical trial to get it approved as a medical device
Built With
aws-lambda
aws-polly
facebook-messenger
flask
librosa
node.js
python
tensorflow
wit.ai
Try it out
www.facebook.com
github.com |
10,007 | https://devpost.com/software/wrangle-and-analyze-data | It is clinically diagonising the incident cases how furiously the rate is increased.
Wrangle-and-Analyze-Data
Udacity Data Analyst December 2017 - May 2018.
Project 7: Wrangle and Analyze Data - WeRateDogs twitter account
Project Overview
Introduction
The dataset that I will be wrangling (and analyzing and visualizing) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." WeRateDogs has over 6 million followers and has received international media coverage.
WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017.
What Software Do I Need?
You need to be able to work in a Jupyter Notebook on your computer.
The following packages (libraries) need to be installed. You can install these packages via conda or pip. Please revisit our Anaconda tutorial earlier in the Nanodegree program for package installation instructions.
pandas
NumPy
requests
tweepy
json
You need to be able to create written documents that contain images and you need to be able to export these documents as PDF files.
Project Specifications
Code Functionality and Readability
All project code is contained in a Jupyter Notebook named wrangle_act.ipynb and runs without errors.
The Jupyter Notebook has an intuitive, easy-to-follow logical structure. The code uses comments effectively and is interspersed with Jupyter Notebook Markdown cells. The steps of the data wrangling process (i.e. gather, assess, and clean) are clearly identified with comments or Markdown cells, as well.
Gathering Data
Data is successfully gathered:
From at least the three (3) different sources on the Project Details page.
In at least the three (3) different file formats on the Project Details page.
Each piece of data is imported into a separate pandas DataFrame at first.
Assessing Data
Two types of assessment are used:
Visual assessment: each piece of gathered data is displayed in the Jupyter Notebook for visual assessment purposes. Once displayed, data can additionally be assessed in an external application (e.g. Excel, text editor).
Programmatic assessment: pandas' functions and/or methods are used to assess the data.
At least eight (8) data quality issues and two (2) tidiness issues are detected, and include the issues to clean to satisfy the Project Motivation. Each issue is documented in one to a few sentences each.
Cleaning Data
The define, code, and test steps of the cleaning process are clearly documented.
Copies of the original pieces of data are made prior to cleaning.
All issues identified in the assess phase are successfully cleaned using Python and pandas.
A tidy master dataset with all pieces of gathered data is created.
Storing and Acting on Wrangled Data
Save master dataset to a CSV file.
The master dataset is analyzed using pandas in the Jupyter Notebook and at least three (3) separate insights are produced.
At least one (1) labeled visualization is produced in the Jupyter Notebook using Python’s plotting libraries.
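A condensed sketch of that gather, clean, and store flow in pandas (file and column names follow the conventional ones for this project; treat them as placeholders for your local copies):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Gather: each source goes into its own DataFrame first.
archive = pd.read_csv("twitter-archive-enhanced.csv")
predictions = pd.read_csv("image-predictions.tsv", sep="\t")
api_data = (pd.read_json("tweet_json.txt", lines=True)   # one JSON object per line
              .rename(columns={"id": "tweet_id"}))       # key name may differ in your dump

# Clean (two examples): keep original tweets only, fix the timestamp datatype.
archive = archive[archive.retweeted_status_id.isnull()].copy()
archive["timestamp"] = pd.to_datetime(archive["timestamp"])

# Tidy: merge the pieces into one master dataset and store it.
master = (archive
          .merge(predictions, on="tweet_id", how="left")
          .merge(api_data[["tweet_id", "retweet_count", "favorite_count"]],
                 on="tweet_id", how="left"))
master.to_csv("twitter_archive_master.csv", index=False)

# One labeled visualization for the act report.
master.plot(x="retweet_count", y="favorite_count", kind="scatter", alpha=0.3)
plt.title("Favorites vs. retweets for WeRateDogs tweets")
plt.xlabel("Retweet count")
plt.ylabel("Favorite count")
plt.savefig("favorites_vs_retweets.png")
```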
Report
Two reports:
Wrangling efforts are briefly described in wrangle_report.
The three (3) or more insights the student found are communicated in act_report.pdf including visualization.
Built With
jupyter-notebook
Try it out
github.com |
10,007 | https://devpost.com/software/spot-it-get-faster-exam-prep | Inspiration
Raise your hands if you have scrambled through your lecture notes/videos at the last moment before an exam. Yes! All of us have done that!
No matter how early we start preparing for the exams, last-minute preparations always help. Browsing through Piazza posts, lecture slides, and videos to find a concept during the last minute is time-consuming and stressful. We always felt we could save an ample amount of time if someone could just point us directly to the resources we need to clarify our doubts. This is where the idea for Spot.It was born.
What it does
Spot.It gives you a very simple and intuitive interface to search through lecture videos and find the exact points where the professor explains the topic, gets you the top results from the internet along with a curated list of short (< 5 min) videos from YouTube, all in one place and with just one search. So you won't have to spend time searching through your lectures or online references to clarify your doubts.
How I built it
We used Angular 9 and NestJS (both TypeScript-based frameworks) to build the frontend and backend of the application. We also used Facebook's
Wit.ai
to perform Natural Language Understanding on the user's query and used the results from the intent processor to perform queries on the data in our system.
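As an illustration (in Python, although the service itself is TypeScript/NestJS), running a search query through Wit.ai before matching transcripts might look like this; the intent and entity names are assumptions:

```python
import requests

WIT_TOKEN = "YOUR_WIT_SERVER_TOKEN"  # placeholder


def parse_query(query):
    """Return the top intent and the topic entity (entity name 'topic:topic' is assumed)."""
    data = requests.get(
        "https://api.wit.ai/message",
        params={"q": query},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    ).json()
    intents = data.get("intents", [])
    intent = intents[0]["name"] if intents else None
    topics = data.get("entities", {}).get("topic:topic", [])
    topic = topics[0]["value"] if topics else query  # fall back to the raw query
    return intent, topic


print(parse_query("where does the professor explain dynamic programming?"))
```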
Challenges I ran into
The main challenge was to fetch the right results that save student's time while preparing for the exam. There were a lot of great experiences during the roadmap of the product.
We needed a reliable and highly accurate source of search results from the internet, and there aren't any free APIs for that. Scraping Google would have ended up getting the backend server's IP blocked. We tried DuckDuckGo and Rapid API as well, but either the APIs weren't free or the results weren't as accurate as needed. Thanks to Microsoft, we ended up fetching results from Bing!
Transcribing YouTube videos was another good challenge. We started with a Python script, which worked well at the start until getting stuck at YouTube's bot captcha. We tried a third-party API, which resulted in getting our IP blocked (we wonder if they scraped results :P). Finally, we settled on another npm package that uses a legitimate way to transcribe YouTube.
You can't just search the YouTube transcriptions for the user's query, as videos are of varying duration with multiple occurrences of the query, which results in numerous annotation timestamps that are only a few seconds apart. We added functionality which helps the system decide whether the user should just watch the full video (for shorter videos) and which returns only the most confident spots for longer videos, instead of sending all possible spots; a small sketch of this merging idea follows this list.
Last but not least, finding time for the project while working a full-time job!!
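The spot-merging idea mentioned above can be sketched like this (the thresholds are illustrative, not the production values):

```python
def pick_spots(hit_seconds, video_length, merge_window=30, short_video=300):
    """hit_seconds: timestamps (s) where the query matched the transcript.
    Collapse hits that are only a few seconds apart; for short videos,
    just recommend watching the whole video."""
    if video_length <= short_video:
        return ["watch the full video"]
    spots, last = [], None
    for t in sorted(hit_seconds):
        if last is None or t - last > merge_window:
            spots.append(t)          # keep only the first hit of each cluster
        last = t
    return spots


print(pick_spots([62, 65, 70, 410, 412], video_length=1800))  # -> [62, 410]
```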
Accomplishments that I'm proud of
It works! This project is a proof of concept for what we envisioned about software doing such kind of tasks. We are pretty sure all the students out there would find this helpful.
Angular changed a lot since we last worked on it so it was a great brush up on that.
Last but not least,
Wit.ai
is awesome. It's simple, intuitive, and has no API call limit. The best part is the interface for training the utterances. Very cool.
What I learned
Sometimes things don't go the way you expect them to. The second YouTube transcription approach started failing just a few days before submission.
Wit.ai
is a great tool for building chatbot-like experiences. We weren't aware of it until we found out about this hackathon; we had mostly worked with Google's Dialogflow.
What's next for Spot.It - Get Faster Exam Prep
A lot! This version of Spot.It is just a proof of concept running on free servers and using some free resources available online. The future roadmap includes adding the capability to search through lecture slides and connecting with student discussion platforms like Piazza or Facebook Groups to get more curated and accurate results, making last-minute preparations even faster.
Adding support for uploading videos from other platforms.
Adding support for user signup
Adding support for voice-based search using speech to text
Last but not least, getting better infrastructure resources to scale-out
Built With
angular-material
angular.js
nestjs
node.js
typescript |
10,007 | https://devpost.com/software/lisa-rlcyjx | Inspiration
I thought it would be cool if Facebook had a voice assistant because I love their products and it seemed logical just to build one on my own!
What it does
Read Facebook posts, greetings, and tell jokes :)
How I built it
I used the Facebook Core SDK, Facebook Login SDK, Swift, and Wit.ai
Challenges I ran into
I had to work on this project at the same time as doing my internship so it was hard to find time.
Accomplishments that I'm proud of
This was my first time building an app with Swift and I thought it was a good learning experience. I was also able to program this app by myself while doing my internship!
What I learned
I learned that Facebook's SDKs are very developer-friendly and I hope to use them again soon
What's next for Lisa
Improving the UI
Built With
av-foundation
facebook-graph
facebook-login-api
swift
uikit
wit.ai
Try it out
github.com |
10,007 | https://devpost.com/software/localize-4sfajy | Self-service
Be a volunteer
Need for volunteers
Selection of timeslots for need for volunteers
Modified One Time Notification to Notify people who need volunteers that their order has been accepted
Carousel for shops in self-service
Booking time slots for booking slots in self-service
An example of QR code generated.
Use of Quick Replies for faster movement
Inspiration
The geography of the whole world has changed due to
COVID-19
. This has affected people from all stretches of life. Localize was inspired by a real-life incident which happened in the month of May. In India, due to such a large population, the government had to impose a strict lockdown across all the states. This led to major panic everywhere. The major issue was the availability of day-to-day items and how people would buy them from shops given the conditions of
social distancing
.
The social-distancing rules were as follows-
No travelling either by cars, bikes, trains or flights.
The stores were also not allowed to deliver items and neither were the third party professional delivery services allowed. This was done because, in one of the cases, the delivery person of a famous pizza serving food-chain was infected with COVID-19 in Delhi (
source
).
6 feet
distancing is mandatory in public.
Curfew after 7 PM.
Shops can only open from 9 AM to 6:30 PM.
The shops came out with a plan which is as follows -
People will stand in circles marked on the ground, outside the shop. These circles will be 6 feet apart.
This solution had multiple issues. They are as follows-
Only 2 people were allowed at a time in the shop for shopping.
People still did their shopping in a leisurely manner. In some cases, when I went to buy groceries, the people in front of me in the line would shop for
45 minutes
and come out with just a few vegetables.
The temperature in India reaches more than
104°F
or
40°C
. People have to stand in line under the sun for an unknown amount of time.
People who get tired usually sit near a tree with very little space between them (less than 6 feet),
violating the social distancing rule
.
Police brutality
- There have been many
cases
, where police went to extreme measures and beat up people who were just going to buy groceries. Some more sources are as follows-
Cop suspended for beating women at ration shop in Noida
Indian man shopping for milk under curfew dies after alleged police beating
India Doesn't Have a System To Make Sure People Are Out for the Right Reasons During the Lockdown
Delivery personnel beaten up by police as people struggle for essential goods
And many more...
The people who are most affected by this situation are-
Elderly people - In the local news, a
75-year-old
man had to stand under the sun, waiting for his turn to buy groceries for
2
days. On the
3rd
day, he fainted and had to be rushed to a hospital.
People with disability
People who were victims of the above-mentioned police brutality.
I went through the stats and as per the
source
, India has more than
280 million Facebook Messenger
users. This can be leveraged to create a
proper
and
reliable
system that eliminates this
chaos
.
Requirement Analysis
We need a
no contact
based system to remove this chaos. We are going for
QR code
based access.
Talked to a few managers and owners of our local shops and grocery stores. They said that people usually shop for
15 to 20 minutes
. Apart from that, they just go through
offers
and look for anything
new
.
Also, people usually come back to the shop
3-7 days
after buying groceries.
We need something with a low learning curve which can be taught to the users very easily.
To keep the grocery shopping local to prevent travelling.
To prevent police brutality by having a system which can be shown to them, saying "I am going to buy groceries."
What it does
Localize
is divided into
3
parts. They are as follows -
Self-service
- Here the user can select the local stores in his area and select a timeslot accordingly. The path is as follows-
Enter your name -> Enter your approximate address -> Select the shops -> Select a time slot -> Download the QR Code -> Receive a token which is a backup for QR Code -> Visit that shop at the chosen timeslot -> Get the QR Code scanned -> Shop -> Bill -> Leave
Here, we have used the QR code for a no-contact system. The time slots run from 9 AM to 6:30 PM, divided into
15-minute
slots. This allows
38
customers to shop in a day at a single shop/store. It has reduced the chaos to a very large extent, and when asked by police officials, a person can show the proof.
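A Python sketch of these booking artifacts (the bot itself runs on Node.js; the token and QR payload format below are made up for illustration):

```python
import uuid
from datetime import datetime, timedelta
import qrcode  # pip install qrcode[pil]


def daily_slots(open_at="09:00", close_at="18:30", minutes=15):
    """15-minute slots between 9:00 and 18:30 give 38 bookings per shop per day."""
    start = datetime.strptime(open_at, "%H:%M")
    end = datetime.strptime(close_at, "%H:%M")
    slots = []
    while start + timedelta(minutes=minutes) <= end:
        slots.append(start.strftime("%H:%M"))
        start += timedelta(minutes=minutes)
    return slots


def make_booking_qr(name, shop, slot):
    token = uuid.uuid4().hex[:8].upper()            # backup token also sent as text
    payload = f"LOCALIZE|{name}|{shop}|{slot}|{token}"  # hypothetical payload format
    qrcode.make(payload).save(f"booking_{token}.png")
    return token


slots = daily_slots()
print(len(slots))                                    # 38
print(make_booking_qr("Asha", "City Grocers", slots[0]))
```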
Need for volunteers
- Here the user can ask for help from nearby volunteers. This option is very helpful for people who cannot physically go to the store for legitimate reasons like
old age
,
people with disabilities
or for
people with special needs
. The path is as follows-
Self-certification for COVID-19 -> Enter your name -> Enter your approximate address -> Enter items that you need (example - I need apples, oranges, flour and carrots) -> Select a time slot -> Click on "Notify Me" -> Wait for volunteer to accept order
Here, we have asked the people who need volunteers to self-certify that they have shown no COVID-19 symptoms in the past
14 days
and to be polite to the volunteers and treat them with respect. Here, the time slots have been increased to
1 hour
slots to prevent any pressure on the
volunteers
and to provide a smooth experience.
Also, the "Notify me" feature was added so that the people who need volunteers can contact with the volunteers and discuss the issue.
Be a volunteer
- Here the user can choose to help people in need by being a volunteer. The deliveries needed are shown in a sorted manner according to the distance. The path is as follows-
Self-certification for COVID-19 -> Enter your name -> Enter your phone number -> Enter your approximate address -> See delivery list along with time slots, items needed, name, address and distance -> Accept a delivery
NOTE
- The
wit.ai
model has been trained on Indian phone numbers only. You can try a test number as shown in video.
After accepting the delivery, the user whose delivery got accepted will receive a notification to contact the volunteer.
NOTE
- The volunteer will then go to
self-service
option and follow those steps.
You can use it not only for groceries, but also for medicines and all other nearby shops.
You can also say
hello
to restart the bot. Apart from that, everything works on quick replies to make the process
easier
and
faster
.
How I built it
This is my first try at creating a Facebook Messenger chatbot. I tried to make it as simple and useful as possible, with an idea that can be highly impactful at such times. The tech stack and implementation choices are as follows-
For hosting, we made use of
Heroku
.
Node.js
helped in fast development.
Draw.io
was used to visualize the different paths.
Google's Places API
was used for suggesting nearby places and calculating approximate distance.
Google's Geocoding and Reverse Geocoding API
was used to convert addresses to coordinates for calculating distances and nearby places.
MongoDB
as a database.
One Time Notification
was used.
Quick reply
was used.
wit.ai
for capturing names, addresses, items and phone numbers.
Webview
was used to display large contents like
deliveries
and
timeslots
.
After this, we had to set up the logic of the bot.
First of all, I drew the path and performed requirement analysis for this issue. This helped me to divide the service into 3 parts -
Self-service
,
Need for volunteers
and
Be a volunteer
.
Creating a separate database for timeslots, which is accessed by the self-service flow.
Training and validating
wit.ai
for handling addresses. It required a lot of patience as some Indian names were not being processed accurately along with some locations. It had to be trained for items and phone numbers too. In some cases, it was taking
items
as
names
.
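A hedged Python sketch of the geocoding and distance-sorting step (the bot itself is Node.js; the straight-line haversine distance here stands in for whatever distance the Places/Geocoding integration actually returns):

```python
import math
import requests

GOOGLE_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder


def geocode(address):
    """Turn an approximate address into (lat, lng) via Google's Geocoding API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": GOOGLE_KEY},
    ).json()
    loc = resp["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]


def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lng) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))


def sort_deliveries(volunteer_address, deliveries):
    """deliveries: list of dicts with an 'address' key; returns nearest first."""
    origin = geocode(volunteer_address)
    return sorted(deliveries, key=lambda d: haversine_km(origin, geocode(d["address"])))
```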
Challenges I ran into
Some of the challenges that I faced were mostly related to approval issues. Due to COVID-19, the approval process is slow, and hence I had to roll the bot out in
Development Mode
only and I have given access to
499418056 and stef.devpost.1
.
Modifying the One Time Notification flow to notify the user when their delivery has been accepted. The Facebook Messenger documentation helped a lot.
Displaying large content on Messenger was not feasible at all, so we had to experiment and finally decided to go with webview.
The most important part is to
reset
the timeslots at
8 PM (IST)
so that people can select shops again. I solved this by keeping a timer.
Accomplishments that I'm proud of
I was able to understand the issue, and thanks to such a scalable and impactful platform as Facebook Messenger, I was able to implement it.
The future prospects of this bot and how it will help the
people
and the
business
excites me.
What I learned
I learnt a lot of new things since this is my first try at a Messenger bot. I learnt about various things, ranging from working with the different APIs provided by Facebook, training wit.ai, experimenting with the
Graph API
, creating a fully functional messenger bot and tinkering with the APIs.
What's next for Localize
Talking to and showcasing the bot to the managers and owners of local establishments.
Talk with the local police force and discuss the acknowledgement of the project.
Rolling the bot out publicly for usage.
Spreading awareness about the bot so that people start to use it.
Adding
private reply
feature after acceptance.
Train it on multiple models using
wit.ai
. For example, as of now, it is trained on Indian mobile phone numbers.
Built With
facebook-messenger
google-maps
graph-api
heroku
javascript
node.js
webview
wit.ai
Try it out
github.com
m.me |