Nerd out on the ultimate crowdsourcing science fair project

Posted November 2nd, 2016

This week, the University of Texas' School of Information hosted a conference called Human Computation and Crowdsourcing, or HCOMP for short, that featured some of the brightest minds in artificial intelligence. Conference sponsors included Google and Microsoft.

On Tuesday, HCOMP held a demo poster session, which was an awful lot like a grown-up science fair project. University researchers displayed findings of various studies related to crowdsourcing. 

We've compiled the 5 most interesting demos from the conference: 

1. Video Summarizer

Shay Sheinfeld next to his poster, "Video Summarization using Crowdsourced Causality Graphs" (Photo: Nancy Huang)

Shay Sheinfeld described his algorithm as a video editor, cutter, and director all in one. "Usually people are too impatient to watch entire videos, so our goal is to shorten videos using content skimming searches while still keeping their original plots," Sheinfeld said. The algorithm's output is a "summary" of the entire video, like a trailer-length movie. 

2. Different Methods of Rating

Mathematical engineer Alessandro Checco stands next to his posterboard, "Pairwise, Magnitude, or Stars: What's the Best Way for Crowds to Rate?" (Photo: Nancy Huang)

Mathematical engineer Alessandro Checco said a 1-to-5-star scale is ineffective for rating many items in succession.

“When you rate something a 5, and the next one is better than the one before it, you can’t rate anything higher than a 5,” Checco said. “We wanted to test out magnitude ratings, which is assigning any positive number higher than zero, and pairwise ratings, which is choosing between two options.” 

To his surprise, pairwise ratings proved the most accurate of the three methods in his surveys, gathering the most consistent responses.
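Checco's poster doesn't spell out his aggregation method here, but the idea behind pairwise rating is easy to illustrate: instead of asking the crowd for absolute scores, you ask "which of these two is better?" and derive a ranking from the answers. A minimal sketch, using hypothetical crowd responses and simple win counting (real systems often use models like Bradley-Terry instead):

```python
# Sketch of pairwise-rating aggregation by win counting.
# The items and comparison data below are made up for illustration.
from collections import Counter

# Each tuple is (winner, loser) from one crowd question:
# "Which of these two items is better?"
comparisons = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("A", "B"), ("C", "B"), ("A", "C"),
]

# Count how many comparisons each item won.
wins = Counter(winner for winner, _ in comparisons)

# Rank items by number of pairwise wins, most wins first.
ranking = sorted(wins, key=wins.get, reverse=True)
print(ranking)
```

Because every judgment is relative, this sidesteps the ceiling problem Checco describes: a new item that beats the current best simply wins its comparisons, with no "can't go above 5" cap.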

3. Reconnecting elders via social tagging

Francisco Ibarra next to his team's board, "Tools Enabling Online Communication for Elderly Adults" (Photo: Nancy Huang)

Francisco Ibarra said many older adults benefit from having online interactions, but getting them to participate is difficult. 

“Our solution is to take photos from their lives, and ask them to identify people and locations,” Ibarra said, relating the process to geotagging. “Once we have a network of people they are familiar with, we match profiles.” 

Eventually, elders will be able to interact with one another through this shared network.

4. Game Feedback Survey

Dartmouth Professor Mary Flanagan standing next to her team's poster, "Feedback and Timing in a Crowdsourcing Game" (Photo: Nancy Huang)

Professor Mary Flanagan and her team were surprised by their gaming survey results. 

"Gamers depend on feedback to know whether or not they're doing the right thing," Flanagan said. "We designed a game where the feedback was only 50 percent reliable. It would only be trustworthy half of the time. We thought that they'd hate it and say it was the worst game ever. Instead of hating it, the survey responders were willing to play it again, and it ranked about the same as games with reliable feedback."

5. Phonetic Language among Mixed Groups

Purushotam Radadia next to his board at HCOMP 2016 (Photo: Nancy Huang)

Researcher Purushotam Radadia said people's ability to identify other languages phonetically varies depending on the language they speak. "If you have someone who speaks a tonal language, like Chinese, Vietnamese, they will more likely be able to accurately identify Cyrillic tongues, like Russian," Radadia said. His team's project surveyed multilingual workers by having them listen to languages and pick the most accurate phonetic spelling, then compiled the results into a context tree.