Multimodal emotion analysis, interactive chatbot and area-based agricultural subsidies

If you can't quite see how these topics relate to each other, you are right: they don't. They were simply the three mini projects of our latest Hackathon at Precognox.

The Hackathon was organized to give our colleagues a new experience, to get them out of the rut for a day, and to practise finding alternative solutions to particular types of problems. We worked in teams on three very exciting mini projects, each of which required experts from various fields (developers for the frontend and backend tasks, QAs, NLP experts). The atmosphere was more supportive than competitive, especially since our projects were so different.


The mini projects:

• Multimodal emotion analysis of video material, including text and voice
• Building an interactive chatbot app that helps users buy a present by posing and answering relevant questions
• Cleaning and visualizing the data in a database of area-based agricultural subsidies distributed between 2010 and 2015, with the help of the Google Maps API

For the multimodal emotion analysis project we needed short, two-minute videos in Hungarian and English with easy-to-parse textual, audio and visual information. Each type of data analysis required a different tool, and we generally chose APIs. The identified emotions were added to the videos as annotations: female speakers' emotions popped up in pink bubbles on the screen and male speakers' in blue ones. For processing visual emotions we used a separate API. With the results in hand, we examined the interconnectedness of the three analysed modalities. We assumed that the combination of the three multimodal analytic tools would effectively compensate for each other's shortcomings, and that together they could reveal emotions that would be lost in an analysis based on a single factor. In the future we would like to find out what multimodal analysis can show in cases of tricky phenomena such as lying, gloating, irony or scorn.
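The fusion idea above can be sketched in a few lines. This is a hypothetical illustration, not the code we ran at the Hackathon: the emotion labels, scores and simple averaging scheme are all assumptions, and the real APIs each returned their own formats.

```python
# Hypothetical sketch: fusing per-modality emotion scores, where any
# modality may be missing (None) for a given clip. Labels and weights
# are illustrative only.

def fuse_emotions(text_scores, audio_scores, visual_scores):
    """Average per-emotion scores across the available modalities.

    Each argument maps emotion labels to confidences in [0, 1],
    or is None if that modality produced no result for the clip.
    """
    fused = {}
    modalities = [m for m in (text_scores, audio_scores, visual_scores) if m]
    for scores in modalities:
        for emotion, score in scores.items():
            fused.setdefault(emotion, []).append(score)
    # One modality can fill in an emotion that another missed entirely.
    return {emotion: sum(vals) / len(vals) for emotion, vals in fused.items()}

print(fuse_emotions(
    {"joy": 0.8, "surprise": 0.2},
    None,                          # e.g. audio analysis failed on this clip
    {"joy": 0.6, "anger": 0.1},
))
```

The point of the averaging is exactly the complementarity mentioned above: "anger" detected only visually still survives into the fused result instead of being lost.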

The chatbot project went well and the team pieced together a nice app. First we decided on the categories of gifts the app would recommend, then we set up the structure of the application. A thesaurus was built to back the interface, for which we chose a dancing, cheeky-eyed little Disney figure we named Eugene. In the meantime the frontend and backend functions were also developed. Having done all this, the team tested the app by creating a number of alternative user personas, such as an old lady wishing to buy a mug for her grandson, or a young lad planning to go hiking with his friends and needing some equipment. The whole project was very inspiring and fun to work on!
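The question-and-answer flow can be pictured as narrowing a set of gift categories with each answer. The sketch below is purely illustrative: the questions, categories and yes/no branching are invented, not Eugene's actual data or logic.

```python
# Hypothetical sketch of a question-driven gift recommender: each
# yes/no answer keeps only the categories compatible with it.

QUESTIONS = [
    ("Is the gift for an adult?", {True: {"books", "hiking gear", "mugs"},
                                   False: {"toys", "books"}}),
    ("Will it be used outdoors?", {True: {"hiking gear", "toys"},
                                   False: {"books", "mugs"}}),
]

def recommend(answers):
    """Intersect the category sets selected by each answer in turn."""
    candidates = {"books", "hiking gear", "mugs", "toys"}
    for (_, branches), answer in zip(QUESTIONS, answers):
        candidates &= branches[answer]
    return sorted(candidates)

# The old lady shopping for her grandson: not an adult, used indoors.
print(recommend([False, False]))   # ['books']
# The young hiker persona: adult, used outdoors.
print(recommend([True, True]))     # ['hiking gear']
```

Testing with personas, as the team did, amounts to checking that each answer sequence lands on a sensible category.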

In the agricultural project we attempted to visualize the cleaned data that we obtained from K-Monitor. The goal of the project was only partly achieved because of the sheer scope of the required data, but it still gave us plenty of insight and valuable experience. We chose to limit the number of analysed subsidies and addresses to under 1,000, and the resulting visualization was spectacularly better than with the original database content. All in all, the project was a success and provided us with a solid knowledge base for tackling similar issues in the future.
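A minimal sketch of how cleaned records might be turned into markers for the Google Maps JavaScript API is shown below. The field names, sample recipients and amounts are invented for illustration, and in practice the coordinates would come from a geocoding step on the cleaned addresses.

```python
# Hypothetical sketch: capping the record count and converting cleaned
# subsidy rows into marker options for the Google Maps JavaScript API.
import json

records = [
    {"recipient": "Example Farm Kft.", "amount_huf": 12_500_000,
     "lat": 46.90, "lng": 17.89},
    {"recipient": "Minta Agro Bt.", "amount_huf": 3_200_000,
     "lat": 47.50, "lng": 19.04},
]

def to_markers(records, limit=1000):
    """Keep at most `limit` rows and build marker option dictionaries."""
    markers = []
    for row in records[:limit]:
        markers.append({
            "position": {"lat": row["lat"], "lng": row["lng"]},
            "title": f'{row["recipient"]}: {row["amount_huf"]:,} HUF',
        })
    return markers

# The JSON can be embedded in a page that creates one
# google.maps.Marker per entry.
print(json.dumps(to_markers(records), indent=2))
```

The `limit` parameter mirrors the decision to keep the analysed set under 1,000 entries so the map stays readable.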

Our teams and the day’s final report:

If you liked the article please share it with others!