- Builder Survey Tutorial – Part I
- The New CrowdFlower Product Series: (Re)imagining the Future of Work
- Contributor Profile – The Aspiring Roadie
- Simple Sentiment Analysis
- Recap: Builder Pro Training Day NYC
- Radiology and Crowdsourcing
- Builder Pro Training Day New York – May 22
- Ember.js at CrowdFlower
- Recap: Builder Pro Training Day
- Discovering Drug Side Effects with Crowdsourcing
- Measuring Local Search Relevance for YP.com
- The Wisdom of the Crowd: Oscar Edition (Part 1)
- Skilled Crowds: Identifying talent in the world’s largest crowd
- Who's Twitter Got?
- The Accuracy of Apple Maps Listings: Reality Check
Today, we're going to provide a step-by-step tutorial on how to create a job on CrowdFlower Builder. Builder can be used for many different crowdsourcing objectives, such as sentiment analysis, business data enhancement, and categorization. Surveys are also a popular use of Builder, and a survey tutorial can serve as a great introduction to the platform's powerful technology. This first post will outline the first two steps in creating a survey on Builder.

1. CREATING A JOB

First, click the "Create New Job" button on the jobs dashboard after you have signed in.

_Figure 1.0 – Creating a New Job_

Then, navigate to the "Edit" tab, where you will begin creating instructions for the contributors (survey participants). Be sure to provide a clear and concise title that describes your survey well. Avoid displaying payment amounts, as payment often varies across multiple channels.

_Figure 1.1 – Entering Title & Instructions in the Graphical Editor_

2. ADDING THE QUESTIONNAIRE

Now it's time to add the questions that will make up your survey. These questions will be composed using CML (CrowdFlower Markup Language). You may use the Graphical Editor (as seen in Figure 2.0) or the CML Editor (as seen in Figure 2.1). If you are using the Graphical Editor, click on the box entitled "ADD FORM ELEMENTS" and select the form elements you wish to use in your survey from the left-hand column (Figure 2.0).

_Figure 2.0 – Creating a Questionnaire in the Graphical Editor_

You can switch between the Graphical Editor (above) and the CML Editor (below) as you like by selecting the blue button on the left side of the interface. Be sure to click "Save" before switching or your progress will be lost. You may also preview the job by clicking on the "Preview" button located to the left of the "Save" button. This preview lets you see the task as it will appear to the contributors.
This is a valuable tool, as user experience often plays a significant role in sourcing good quality crowd results.

_Figure 2.1 – Creating a Questionnaire in the CML Editor_

For this particular type of CML element, we will have to include a value="" attribute with whatever output value we want within the quotation marks. We recommend referring to the CML Documentation to verify that you have properly constructed your job before launch, since different elements require different attributes. In this case, we will include the value attributes, and our end result will look like the CML below:
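As a sketch of what such a question looks like in CML (the question wording and values below are placeholders, not the exact job shown in the figures), a single-choice survey question with explicit output values might be written like this:

```xml
<!-- A single-choice question; each cml:radio's value="" is the output
     recorded in your results for that answer. -->
<cml:radios label="How often do you shop online?" validates="required">
  <cml:radio label="Every day" value="every_day" />
  <cml:radio label="A few times a week" value="few_times_week" />
  <cml:radio label="Rarely or never" value="rarely_never" />
</cml:radios>
```

Consult the CML Documentation for the full set of elements and the attributes each one requires.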
_Figure 2.2 – Editing CML Code_

_Figure 2.3 – Preview Your Job_

You can easily preview your job by clicking on the preview button, or by adding /preview after the Job ID in your URL bar.

_In Part II of the Builder Survey Tutorial, we will adjust the settings, launch the job and download our results. Stay tuned…_
Here at CrowdFlower, our Product and Engineering teams are a few months into an ambitious project: building everything we've learned about crowdsourcing in the past five years as industry leaders into a new, powerful and intuitive platform. Today, we're excited to kick off a monthly blog series that gives you an insider pass to our development process. Here, we'll cover the platform puzzles CrowdFlower wrestles with every day:

* How do we process 4 million human judgments per day with a relatively small engineering team?
* Which UX will move crowdsourcing from the hands of early adopters into the hands of every business that requires repetitive, online work?
* What does talent management mean in an online crowd of millions?
* Can we become an ecosystem for developers who want to build crowdsourcing apps and tools for profit?
* Most of all: what's it like to rebuild a platform that carries enormous load? A sort of pit-crewing of the car while it's hurtling around the track, or a multi-organ transplant.

Our first post will dive into one of our recent projects: the total rewrite of our worker interface. It's common lore that engaging in a large code-rewrite project is risky at best, and a company-killer at worst. We'll tell you how we made it through with only a few minor scrapes and bruises, and many happier workers. Since its inception, CrowdFlower has been called the future of work. We owe our workers and our task builders not just a pleasant experience on our platform, but one that awes them at every step of the way with the power of crowdsourcing. We look forward to sharing our efforts with you.
_This is the first in a series of profiles of the real people who do the amazing work on CrowdFlower jobs - B.R._

THE ASPIRING ROADIE

NAME: Megan B.
FROM: Middleburg, FL USA
REWARDS SITE: InstaGC.com

DESCRIBE YOUR HOMETOWN: My town is more country than city. I live just down the road from rock stars, and I sometimes pass people on horseback on the way to Walmart. The Walmart is pretty much the highlight of town, but you're almost always guaranteed to run into someone you know while you're out running errands.

WHAT MADE YOU INTERESTED IN MICROTASKING? I guess I never really knew about these types of things until I signed up for some online earning websites. I found it quite interesting to see how much work goes into things like verifying business phone number directories, or gathering web page information, or creating in-text Wikipedia links. These are things I've always made use of, but never fully appreciated how much work went into creating them, so I like being a part of the behind-the-scenes group of people who help make this stuff possible.

WHAT'S THE SINGLE CROWDFLOWER TASK THAT YOU'VE ENJOYED THE MOST? AND WHY? My favorite CrowdFlower task is the one in which I'm given batches of words and I identify as many Wikipedia pages as I can that correspond with those given terms.

HOW DID YOU FIND CROWDFLOWER AND HOW DO YOU LIKE WORKING WITH US? I found CrowdFlower through InstaGC.com. I really enjoy working with CrowdFlower. I never have any problems with getting credit for tasks, like with a lot of other survey and task companies. I enjoy the work and I feel that the payment is very fair for the amount of work involved.

WHAT ADVICE WOULD YOU GIVE OTHER PEOPLE INTERESTED IN WORKING ON MICROTASKS? I would tell that person to try out a little of everything. There are plenty of different tasks available to do, and once you find some you really enjoy doing, it will hardly even feel like you're doing work.

WHAT DO YOU DO FOR FUN?
_(When you're not microtasking, of course)_ My life pretty much revolves around road trips and concerts. I like picking different parts of the country to go see concerts in, instead of just going to shows around the state where I live. I get a chance to experience new places and meet a lot of new people that way, as well as seeing a lot of this country.

IF YOU COULD TRAVEL ANYWHERE IN THE WORLD RIGHT NOW, WHERE WOULD YOU GO AND WHY? That's a tricky question, because I want to go EVERYWHERE. I've never been outside the country, so I guess I'd start small, and pick somewhere like London. It would be nice to experience international travel while not having to adapt to a language barrier.

WHAT IS SOMETHING THAT YOU HAVE ALWAYS DREAMT OF DOING? I've always wanted to work for a touring band. I wouldn't mind being their merch girl, or a tour manager who deals with all the details of travel and coordinating things between the venues and the band, or a tech who makes sure there aren't any problems with any instruments during the show. A job like that would combine my love for travel and music, so it would be perfect.

IS THERE ANYTHING ELSE YOU'D LIKE TO TELL US ABOUT YOURSELF? My friends are in a band that got some pretty heavy radio play a few years back, and they made a pretty good name for themselves. My mom and I have traveled all over the country to see them play shows. We've each seen them perform over 100 times. I don't think there are a lot of people who can say that about many bands :)
I had a little spare time yesterday afternoon (and I've been meaning to move up the crowd microtasking learning curve since joining in April), so I created and launched my first CrowdFlower job. It was a simple sentiment analysis job, using Senti to see how people are feeling about CrowdFlower, and it took me just under 30 minutes to create an account, upload a bunch of tweets, and order the job. I started writing this post just as the job was kicking off. Here's all it took.

First, I signed up for an account on the CrowdFlower home page. Next, after a quick Google search, I found twDocs, a nice little tool to grab and save tweets, and had my tweets all ready to upload. Pretty ugly, but easily uploaded and validated by CrowdFlower Senti. I used the default settings and Senti sent my job to the crowd. Cost - just over $50. When I left to start writing this post, my Senti dashboard looked like this.

I left the job to run overnight and had an email in my inbox this morning letting me know my job was done and linking me to my dashboard with the results, shown below. As head of marketing, I'd call it not a bad report, but there's certainly room for improvement. The tweet content skewed toward event announcements and had less news or issue content. Something to think about for our future social calendar. Overall, a little bit of work, but a tool that could be used by just about any marketing department to show that it's keeping track of company, product or brand sentiment - all with minimal technical skill and a small investment. Now with my first job out of the way, I'm going to try some more ambitious experiments, all aimed at using crowdsourcing to fulfill common marketing needs. Keep an eye on this space.
Thanks to everyone who came to our first Builder Pro Training Day on the East Coast. Builder Pro is the new advanced version of our crowd microtasking platform. We had a full room in midtown and lots of great interaction. Our NHL sentiment analysis job was popular with the Rangers fans in the room. Nathan Zukoff, our Solution Engineer, and Ari Klein, our Director of Customer Success, took the lead in covering the advanced features of Builder Pro. Skilled crowds were the most popular topic of the day, with folks thinking about what kind of skills they could use in their business. The new advanced quality control features were also a hit. Everyone wanted access to their Builder Pro account to try them out. A number of questions came up during the day that I thought I would share:
Q: Can you target particular workers or types of workers?

A: Yes. Today you can target workers by location, i.e. country, or skill, such as language proficiency. CrowdFlower is working on building up more and more "skilled crowds." We are also working to allow more granular demographic targeting.

Q: How do you balance between machine learning and crowd microtasks?

A: This depends quite a bit on the goal of the job. In general, there are two approaches. First, crowd microtasks can be used to train machine learning models so that more and more future tasks can be done algorithmically. Alternatively, tasks that were not effectively handled algorithmically can be routed to the crowd. More complex jobs can combine both of these at various steps in a workflow. Sometimes we use machine learning to help recommend correct answers, saving our contributors from navigating through large taxonomies of possible answers.

Q: Can Builder Pro be accessed via API, for data upload and job control?

A: Yes, Builder Pro has a complete set of REST APIs to upload data and control jobs on the platform. The documentation is available at http://crowdflower.com/docs/api

Q: How do you determine how many "gold" units to create?

A: A conservative approach is to create gold units that represent 10 to 20 percent of the total number of job units for smaller data sets. This ensures there are enough units to train workers and prevent repeats of the gold training set. The larger the job, the smaller the percentage of gold units you need. In other words, there is no need to create 50,000 gold units for a 500,000 unit job. For larger data sets, 100-300 gold units are generally appropriate.

We're thinking about bringing Builder Pro training to Chicago, Austin and Washington D.C. this year. Let us know if you want us to come, or if you live or work near another large city we should visit. Also, if you are interested in a Builder Pro trial license, let me know.
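The gold-sizing rule of thumb above can be sketched as a quick helper. This is our own illustrative function, not a platform feature; the 15% midpoint and the 300-unit cap are example choices within the 10-20% and 100-300 ranges mentioned in the answer:

```python
def recommended_gold_units(total_units: int) -> int:
    """Rough gold-unit count per the rule of thumb above:
     ~10-20% of small jobs (we use 15% as a midpoint),
    capped at a few hundred gold units for large data sets."""
    if total_units <= 0:
        return 0
    fifteen_percent = round(total_units * 0.15)
    # For large jobs there is no need for tens of thousands of
    # gold units; a few hundred is generally enough.
    return min(fifteen_percent, 300)

# A 500-unit job would get ~75 gold units;
# a 500,000-unit job still only needs ~300.
```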
Recently, I've witnessed a rise in the number of crowdsourcing jobs targeted at accomplishing tasks for the medical industry. Every once in a while I'll be fortunate enough to run across an exceptionally intriguing job that makes use of crowdsourcing in a really unique way. Antonio Foncubierta's crowdsourcing job is one of those unique tasks I happened to stumble upon. Antonio works for the Business Informatics Institute at the University of Applied Sciences Western Switzerland (HES-SO). The job he created gives the crowd the task of categorizing medical images, such as x-rays, PET scans and CAT scans. Here's how Antonio describes the problem:

_"The problem with medical images is that they are produced in vast quantities every day - 30% of the world's storage capacity is used by medical images. Retrieval and analysis are quite challenging. In order to train our models and computer-based systems, ground truth is necessary, but it also requires a lot of manual work and time to obtain. Therefore, we thought of using crowdsourcing as a way to quickly obtain basic ground truth."_

The job first displays a link to a page that educates the contributor by clearly distinguishing between different types of medical images. The crowd contributor then views a number of images on the task, similar to the one below.

_Example images from Antonio's radiology job. Starting from the top left and moving clockwise, this page shows Magnetic Resonance, Ultrasound, 2D Radiography, and Computed Tomography._

So what are some of Antonio's tips on conducting a successful crowdsourced job?

_"I think that crowdsourcing is a huge opportunity for researchers when repetitive tasks need to be performed. However, it is extremely important to have good methods for assessing the quality of the judgments in order to use the full potential of crowdsourcing."_

It's a creative approach to a pressing problem.
This job provides insight into the benefits that can be derived from crowdsourcing, while also demonstrating the close and often intertwined relationship technology and medicine share. With the exponential increase in medical data being generated, crowdsourcing is poised to be the next step in overcoming some of the most crucial obstacles in the medical domain. You can read more about Antonio Foncubierta's research findings at the links below:

* Antonio Foncubierta-Rodríguez and Henning Müller, "Ground truth generation in medical imaging: A crowdsourcing-based iterative approach," in: Workshop on Crowdsourcing for Multimedia, ACM Multimedia, Nara, Japan, 2012
* Antonio Foncubierta-Rodríguez and Henning Müller, "Crowdsourcing opportunities in medical imaging" (2013), in: IEEE Communication Society letter
We're excited to let you know that CrowdFlower training is coming to New York. We held one here in San Francisco in March to rave reviews and now we're taking it on the road. Here's what we'll be covering: * CrowdFlower Builder Pro overview and architecture * A walkthrough of Builder Pro features * Creating jobs with Builder Pro * Technology directions and feature roadmap This event is ideal for data scientists, search scientists, developers, researchers, sentiment analysts and others interested in learning how to use microtasking with or in their applications. The content is technical, and we will be writing a little bit of code (in the morning, when we're all fresh). The class will be held in midtown at SaGE Office Suites 276 5th Avenue next Wednesday, May 22. Space is limited, so please register here if you plan on attending. The course is free, and all attendees will get platform credits to put Builder Pro through its paces.
Every day, I love coming to work at CrowdFlower because I get to build UIs that are changing the way work gets done around the world, conceivably for the next twenty years. We're just beginning to scratch the surface as we empower customers and contributors alike with great interfaces. It's an exciting time to be working with Ember.js to build interfaces like those depicted below for Skills Tests, our Senti dashboard, and our Contributor dashboard:

Though Ember.js has seen some criticism recently, I've been a big fan of its approach to solving the challenges of modern web development for over a year, particularly as the framework saves me time by generating much of the code I would otherwise have to write. Developing with such a bleeding-edge technology hasn't been all rainbows and ponies, though. A major hurdle we've faced has been dealing with changes in the framework's API. We currently have three applications in production using _three_ different versions of the framework (and we're about to roll out a fourth). The volatility of the API has presented two issues in particular. First, it's tough to codify a set of best practices for architecture and testing to apply uniformly across applications against a moving target. The second issue has been bringing other engineers up to speed on those (evolving) frontend practices - we're all solid full-stackers, but not many of us spend as much time in the presentation layer as I do. Our approach to Ember.js in particular speaks to our engineering culture in general: to minimize exposure to risk, we vet bleeding-edge technologies where one or two people are able to become localized centers of excellence. Then, as the technologies mature, those who ran point are able to pay it forward by transferring knowledge across the team through pairing, kitchen conversations, wikis, and code lunches.
If you'd like to be a part of creating great user experiences as we change how the world gets works done, join us. _Anthony (@inkredabull) is the Sr. Web Engineer at CrowdFlower. He's presented on Ember.js, maintains the Yeoman Ember.js generator, teaches the Pro Ember.js Class at Marakana, and thanks CrowdFlower for supporting his participation in the Ember.js and OSS communities._
_Our office in San Francisco. For all on the East Coast: not to worry, we'll be hosting a day in New York for you soon!_

Forward-looking Fortune 500 companies spanning various industries have been forming Crowdsourcing Centers of Excellence and internal Crowdsourcing Programs for years. As more businesses recognize the value of bringing crowdsourcing expertise in-house, we are ready to provide access to our market-leading microtasking platform: Builder Pro. It puts the tools of the CrowdFlower Admin at the fingertips of any company.

As the crowdsourcing industry matures, we've begun to cross the chasm. In his book Crossing the Chasm, Geoffrey A. Moore describes the chasm as the gap in technology adoption between innovative, early adopting customers and the subsequent group dubbed the early majority. Well, we've been listening to our early adopters intently and in response recently launched Builder Pro - a premium (read: totally souped-up) version of our self-service microtasking platform (Builder).

To soft-launch Builder Pro, we invited 20 customers into our office and held an exclusive, full-day training event. While having a lot of fun, we also received instant product feedback. Ah yes, and all attendees received a 1-month free trial of Builder Pro and $99 of credit to test it out :) We brought in guest speakers, had roundtable discussions, and defined immediately valuable use cases of crowdsourcing. Here's a snapshot of what we covered:

* How to deconstruct a business process and create tasks for the crowd to perform
* Best practices in task monitoring, after the crowd has begun working on a project
* A broad survey of successful crowdsourcing projects and interesting applications
* Our product roadmap for Builder Pro and the exciting features that will be released
* Advanced methods in managing the quality of results for high volume projects

Builder Pro Training Day was a resounding success, so we're taking this show on the road!
Our next stop will be New York. If you have feedback, questions or you’d like an invitation to our New York training day, reach out to me or leave a comment below! @arielklein
As a recovering biology major, one of my favorite applications of crowdsourcing is solving public health problems. So far at CrowdFlower, we've enlisted the crowd to kill TB cells, count neurons in mouse cortices, and track epidemics. With some of the new tools we've built in the last year, I'd like to add tracking drug adverse events in social media to the list.

The Experiment:

We collected all tweets that contained "Claritin" for the month of October 2012. After some basic filtering for spam tweets offering to sell Viagra with allergy medication, we were left with 4,900 tweets.
_Claritin just gave me migraine. Anybody else ever have that happen? -- Susan Cathey Union (@SusanUnion) October 30, 2012_

Next we looked to adverseevents.com for Claritin's top 10 most common adverse events. The top 10 adverse events are from the FDA Adverse Event Reporting System (FAERS), which collects mandatory adverse event reports from drug manufacturers, and voluntary reports from medical professionals and consumers. Remember the Vioxx and fen-phen recalls? These drugs were withdrawn due to safety concerns as a result of Serious Adverse Events (SAEs) that were reported to the FDA through FAERS.

The Results:

We used our sentiment analysis product, Senti, to have our crowd review the 4,900 tweets and classify them for relevance, sentiment, author's gender, and any of the top 10 most common adverse events as reported to the FDA. You can explore the data by clicking on the interactive graph below.

_Interactive Senti dashboard of one month of Claritin adverse events on Twitter, built with d3.js and crossfilter_

We found 295 instances of adverse events in the top 10 categories on Twitter - a number higher than is reported to the FDA. In the last 12 months for which data is reported (Jul 2011-Jun 2012; data is only available through June 2012 at the time of writing), there has been an average of 8 adverse events per month where Claritin was the primary suspect in reports to the FDA, and 265 total adverse events per month where Claritin was mentioned to the FDA in conjunction with other drugs. Almost all of the cases we found on Twitter were primarily due to Claritin - **over 30x the number of primary events that are reported to the FDA**.

| Adverse Event | Twitter | FDA - PR* | FDA - TR** |
| --- | --- | --- | --- |
| Dizziness | 11 | 0 | 16 |
| Convulsions | 0 | 1 | 6 |
| Heart Palpitations | 5 | 0 | 7 |
| Shortness of Breath | 4 | 0 | 19 |
| Headaches | 7 | 0 | 16 |
| Drug Effect Decreased | 66 | 0 | 3 |
| Allergies Worse After Taking Drug | 132 | 0 | 8 |
| Bad Interaction Between Claritin and Another Drug | 40 | 0 | 5 |
| Nausea | 4 | 0 | 19 |
| Insomnia | 26 | 0 | 9 |
| Other | 0 | 7 | 157 |
| TOTAL | 295 | 8 | 265 |

*Primary Reports **Total Reports

_Twitter adverse events are for October 2012; FDA's are a rolling 12-month average_

Over six percent of tweets mention an adverse event. The most common complaint is Claritin's notorious failure rate. It's not surprising that more serious adverse events (convulsions, heart palpitations, shortness of breath) are underrepresented in social media, while comparatively minor effects (decreased drug efficacy, allergies worse, nausea) are overrepresented on Twitter relative to FDA reports.

Serious health problems are much more likely to be reported through traditional medical channels as opposed to social media. However, many lower impact effects - drug not working, allergies worse, bad interactions between Claritin and other drugs - may be deemed too minor to be reported through traditional medical channels, and they show up on Twitter in greater volume.

Percentages by Gender

_Chart: Total tweets that contain "Claritin"; tweets with adverse events on Twitter; adverse events reported to the FDA_

The two genders seem to tweet about Claritin in roughly equal proportion to stated adverse events for the FDA - with women making up the majority of adverse event sufferers by almost a 2:1 margin. Women mention more serious conditions (heart palpitations, shortness of breath, headaches) while men do not. It's unclear if this is because women are more likely to suffer from those conditions, are more likely to tweet about them, or because the sample size is too small to glean anything meaningful.

What It Means:

There is an order of magnitude more data on adverse events available on Twitter than the FDA receives - providing a much more expansive and inclusive dataset for adverse events. With the ability to find many more adverse events than are currently found, social media could provide an early warning detection system for postmarket surveillance of drugs, providing a safer environment for both consumers and pharmaceutical companies. There's incredibly rich data out there, if you're willing to look for it.
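The headline ratios in this post are easy to sanity-check from the counts quoted above (this quick calculation uses only numbers already stated in the post):

```python
# Figures quoted in the post: 4,900 Claritin tweets in October 2012,
# 295 of them reporting a top-10 adverse event, vs. an average of
# 8 primary-suspect reports to the FDA per month.
tweets_collected = 4900
twitter_adverse_events = 295
fda_primary_per_month = 8

share = twitter_adverse_events / tweets_collected
ratio = twitter_adverse_events / fda_primary_per_month

print(f"{share:.1%} of tweets mention an adverse event")   # ~6.0%
print(f"{ratio:.0f}x the FDA's primary-suspect reports")   # ~37x, i.e. over 30x
```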
We're proud to work with YP, the provider of yp.com. YP is known for useful local data at your fingertips. They need accurate data on how relevant their search results are to users: can people easily find what they're looking for? These search quality evaluations are used to achieve a high quality online user experience, to understand the effects of search algorithm tweaks, and to perform frequent competitive analysis. Our new white paper goes into detail on our process for measuring the relevance of search results with the crowd — including findings from a comparative search quality analysis between Google Maps, Bing Local, Yahoo! Local, and YP.com. Check it out (download) and let us know what you think! @arielklein
[Image courtesy of Adarsh Upadhyay] With just two weeks until the 85th Annual Academy Awards, we wanted to leapfrog off some of the recent Oscar buzz and make some predictions of our own. Unlike film critics, at CrowdFlower we prefer not to demonstrate any semblance of independent thought, opting always for the aggregate un-validated opinion of a lot of people we don't know. It's that whole 'wisdom of the crowd' mentality we have, and considering we have hundreds of thousands of people ready to provide their opinion at a moment's notice, I'd say you can't blame us. To make our predictions we created a task in which we asked people in the United States to a) guess how many Oscar winners they would correctly predict and then b) make their predictions. Sure enough, in less than two hours, we had collected over 500 responses from people hailing from 46 of the 50 states. In short, the people had spoken. Below you can see the aggregate crowd wisdom that comprises CrowdFlower's official Oscar predictions. Items at the top of the list had more crowd agreement. Our Best Picture pick is toward the bottom of the list, with only 26% of the vote (still more than any other contender). Looks like Lincoln is going home on top with five awards, including Best Picture, Best Director, and Best Actor. This, of course, is not surprising as we collected the predictions on Lincoln's birthday and last Monday was Presidents' Day. Anything else would be unpatriotic. We were a bit concerned, however, by Anne Hathaway's clear victory over Sally Field (Lincoln) for Best Supporting Actress for her role as Fantine in Les Miserables. Let us not forget that the French got that whole democratic revolution idea from us.
In order to make sure we were putting our best hand forward, once our contributors made their initial selections we informed them that they would be bonused for any correct predictions they made - but only if they correctly predicted at least as many as they thought they would. This ultimately didn't result in any change in the aggregate response, but it did highlight some interesting social dynamics. Men initially had more confidence than women in their responses, believing they'd correctly predict on average one more winner than women. Men, however, were twice as likely to doubt themselves and change their answers once they found out bonuses were involved. Perhaps women are just more honest from the start, or men just more greedy. And speaking of honesty, it turns out that nearly 30% of our respondents admitted they haven't watched a single Oscar-nominated film, which raises the question of what wisdom the crowd is really working with when so many are seemingly picking at random. (We could have limited our results to people who had seen the movies, but we wanted to see if seeing the films would affect people's accuracy.) So, as we often do, we've decided to put the crowd to the test. Can the crowd beat an expert, as the old saying suggests? And even more simply, can the crowd beat a selection of winners that was randomly generated? We'll see after the big show.
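The bonus rule described above - pay per correct prediction, but only if the contributor hit their own stated target - can be sketched as follows. The function name and the per-prediction bonus amount are illustrative assumptions, not CrowdFlower's actual payout logic:

```python
def bonus_owed(predicted_count: int, correct_count: int,
               per_correct_bonus_cents: int = 5) -> int:
    """Pay a bonus per correct prediction, but only if the contributor
    correctly predicted at least as many winners as they guessed they would."""
    if correct_count >= predicted_count:
        return correct_count * per_correct_bonus_cents
    return 0  # fell short of their own estimate: no bonus

# A contributor who claimed 4 and got 5 right earns a bonus;
# one who claimed 6 and got 5 right earns nothing.
```

Note the incentive this creates: a contributor who overstates their confidence risks forfeiting the whole bonus, which is exactly why some respondents revised their answers downward once bonuses were announced.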
Here at CrowdFlower, we take pride in operating a crowdsourcing platform that makes tasks available to anyone with access to the Internet. This practice, combined with our state-of-the-art quality control methods, has allowed us to build the world's largest pool of crowdsourcing contributors. As we begin to tackle projects with higher complexity, we must evolve to better understand and utilize the existing skills and specialized abilities of the crowd. In order to fulfill our vision of becoming a platform for all kinds of tasks, we needed to create a process that was extremely malleable. A huge variety of task types are available to contributors on the CrowdFlower platform, each with unique requirements. We wanted to find a solution that could work not just for the tasks hosted on our platform today, but for applications we haven't even thought of yet. After looking at a number of options, we designed a system that evaluates existing trust scores and allows eligible contributors to voluntarily measure their skills and receive a skill rating. This approach allows us to offer targeted tasks to contributors by any skill we can measure for; quality has already improved, and new applications are possible. We've built custom skills ranging from basic English writing to knowledge of contemporary fashion. What's next? You tell us! We would love to hear from you about the skill types you need. Perhaps you need a group of sports experts? No problem. How about experienced data miners? We can do that too.

_Some of the badges currently being awarded to skilled contributors: image moderation, sentiment analysis, and beta testing_
The Baltimore Ravens and the San Francisco 49ers battle it out on Sunday. Snacks have been acquired. Storylines have been established. Someone, somewhere is carefully dusting a new 72″ plasma. We're ready for our biggest national holiday. However, the crucial question of who Twitter users are rooting for hasn't been answered. It turns out that figuring out the answer is pretty complicated:

WHY NOT JUST SEARCH FOR "ROOTING RAVENS" AND "ROOTING 49ERS" AND COMPARE?

Unfortunately, Twitter doesn't report the number of search results for different searches, so we wouldn't know how many people tweeted "rooting Ravens" or "rooting 49ers". As we'll see, this isn't an accurate metric in any case.

WHY NOT USE AN AUTOMATED TOOL?

Unfortunately, automated sentiment analysis tools can't answer this type of question accurately. To understand why, try to guess how automated tools are likely to assess the following examples:
_Good Gawd "@BreeOlson: Who's rooting for the #Ravens this #SuperBowl? I am! http://t.co/k2UKnApR"_

_Follow if you're rooting for the 49′ers! RT if you're rooting for the Ravens! #SUPERBOWL_

_"@989RadioNow: who are you rooting for in the #SuperBowl? my vote goes to the #RAVENS"_

_49ers here!!!_

These aren't isolated examples; in order to express themselves in 140 characters or less, people often write in ways that are difficult for machines to interpret. Luckily, CrowdFlower has a better tool for the job. We pulled over 1,000 tweets referencing the Super Bowl and "rooting", and had the crowd decide whether each tweet was rooting for the 49ers, the Ravens, or neither. It took about fifteen minutes to set up a CrowdFlower task that got thousands of answers from hundreds of human beings all over the world.

SO, WHAT WAS THE ANSWER?

In news that will surprise no one, Twitter, like God, is rooting for … Ok, maybe we let our hometown bias get the better of us there. Here's how the data came out:

_Number rooting for the 49ers vs. the Ravens_

So Twitter users who mention the Super Bowl and "rooting" in their tweets ARE ROOTING FOR THE RAVENS ABOUT TWICE AS OFTEN AS THEY'RE ROOTING FOR THE 49ERS. Let's pass over this embarrassing fact as quickly as possible.

Why all the tweets that didn't express an opinion? We took a closer look at those tweets. A lot of them looked like this:
_The Superbowl is this Sunday! Who are you rooting for?_

So we formed a hypothesis: many of the neutral tweets were promotional, from companies or other organizations trying to start a conversation with their followers about the Super Bowl. To test this, we had the crowd look at the Twitter profiles and tweets of everyone whose tweet had been rated Neutral, and asked them to decide whether the user was a company or a person. Again, it took about fifteen minutes to set up a task that got thousands of answers on our dataset from all over the world. Here's what we got:

_Number of neutral tweets from companies and from people_

About 40% of the Neutral tweets came from companies. That means the original dataset contained a very high number of tweets from companies. It makes sense that companies would try to start conversations with their customers around the Super Bowl (those who live in glass houses … ), but it's still surprising to get so many in a mostly random pull of Twitter data. One explanation might be that there are fewer companies than people on Twitter, but companies tweet more often. Food for future research.

SO IN SUMMARY:

* Twitter's rooting for the Ravens
* A lot of Twitter traffic about the Super Bowl comes from company accounts
* The people who are most excited about Sunday are all Beyonce fans (you'll have to trust me on this)

Have fun Sunday. _And if you're interested in analyzing Twitter with the crowd, check out our dead-simple, crowd-powered tool Senti._
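For both crowd tasks above, turning many independent judgments per tweet into a single answer comes down to majority voting. Here's a minimal sketch of that aggregation step, using hypothetical tweet IDs and judgments (the actual CrowdFlower platform applies more sophisticated trust-weighted aggregation):

```python
from collections import Counter

# Hypothetical crowd judgments: each tweet receives labels
# ("49ers", "Ravens", or "neither") from several contributors.
judgments = {
    "tweet_1": ["Ravens", "Ravens", "neither"],
    "tweet_2": ["49ers", "49ers", "49ers"],
    "tweet_3": ["neither", "neither", "Ravens"],
}

def majority_label(labels):
    """Return the label most contributors agreed on."""
    return Counter(labels).most_common(1)[0][0]

# Tally one majority answer per tweet across the whole dataset.
tally = Counter(majority_label(labels) for labels in judgments.values())
print(tally)
```

The same pattern applies unchanged to the company-vs-person follow-up task: only the label set differs.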
_Update:_ this piece has been featured on Mashable. Apple Maps, which replaced Google Maps for all iPhone and iPad users in the most recent version of iOS, has been receiving a lot of attention. Things died down for a while after Tim Cook’s apology, but flared up again last week when travelers in Australia were stranded after being directed into the Outback. While we all enjoy taking potshots at the cool kids, as data nerds it bothers us that all we’ve had to go on in the Apple Maps discussion is anecdotal evidence. So we decided to assemble a dataset that would
let us ~~take potshots with authority~~ give some solid answers.
Apple Maps has been criticized for a few things, from how it interprets your search to the directions it gives. We asked a simpler question: WHEN YOU ASK FOR A RESTAURANT, HOTEL OR OTHER BUSINESS, HOW OFTEN DO YOU GET THE RIGHT LOCATION?
Extremely Brief Summary
Apple Maps in the US is bad enough to be noticeable: you probably won't throw away your iPhone, although you may miss a dinner reservation. Those of you using Apple Maps in the UK, however, might want to keep emergency food and water in the car.
For more detail and statistics, read on …
How We Did It
We started with a list of 1,000 US businesses in our database, then added 100 UK businesses to give some idea of international differences. We had our crowd find the official websites of these businesses and extract current address information from the website. This was our reference data.
We had the crowd pull the same information from Apple Maps, Google Maps, and Bing. In order to replicate the search experience of a typical user, we had people search for “business name, city” first before trying different variations.
Then, we compared the results. In the case of major errors we investigated carefully: many UK addresses lack street numbers and can be described in multiple ways.
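Comparing two address strings is trickier than it sounds, because "123 Central Ave NW" and "123 Central Avenue Northwest" describe the same place. A minimal sketch of the kind of normalization such a comparison needs (hypothetical helper, with a deliberately tiny abbreviation table; the real evaluation relied on human judgment for the hard cases):

```python
import re

# A small, illustrative table of common US address abbreviations.
ABBREV = {
    "st": "street", "ave": "avenue", "rd": "road", "blvd": "boulevard",
    "nw": "northwest", "ne": "northeast", "sw": "southwest", "se": "southeast",
}

def normalize(address: str) -> str:
    """Lowercase, strip punctuation, and expand common abbreviations."""
    tokens = re.findall(r"[a-z0-9]+", address.lower())
    return " ".join(ABBREV.get(token, token) for token in tokens)

def same_address(a: str, b: str) -> bool:
    """True when two address strings normalize to the same form."""
    return normalize(a) == normalize(b)
```

Under this scheme, "123 Central Ave NW" matches "123 Central Avenue Northwest" but not "123 Central Ave NE", which is exactly the distinction at issue in the Albuquerque example below.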
Our results have two parts: first, if I search for a business, will I get a result? Second, will that result be accurate?
_Percentage of businesses found_
First we tested how many of the businesses we could find on each service. In results that will surprise no one, Google Maps has listings for the most businesses in the US and UK, with 89% coverage in the US and 91% coverage in the UK. Apple Maps is credible in the US with 74% of businesses found, but with 47% coverage in the UK the phone book starts to look like a real option. Bing is somewhat better than Apple but not great, with 79% coverage in the US and 57% coverage in the UK.
Not being able to find a business is one thing. Getting an incorrect listing might be even worse.
_Percentage of listings with major errors_
We consider a major error to be anything that puts you a block or more from your intended destination. With a 3.4% major error rate in the US (compared to 1.1% for Google Maps and 1.3% for Bing) there’s a decent chance Apple Maps will send you in the wrong direction. Even small errors can be frustrating: witness the difference between the Apple Maps result for Nick's Crossroad Cafe in Albuquerque and the actual location. (We're using a Google Maps screenshot for convenience, but our address data comes from Nick's official website.)
_The difference between Central Ave NW and Central Ave NE_
In the UK, not only are there more incorrect listings, but you might find yourself even further off-course. Here's the Apple Maps result for Advance Gym in Reading, UK compared to the actual location. You'll also notice that the Apple Maps version is missing a lot of detail compared to the Google Maps version.
_About four miles_
Note that we tried to be forgiving in making our evaluation: for example, if Apple Maps had multiple listings and at least one was valid, we accepted that as valid.
Will I get where I want to go?
So when you search for a business, what are the chances that you'll find the business you want AND the address will be accurate?
With Apple Maps you have a 71% shot, compared to 88% for Google Maps and 78% for Bing. And with a 3.4% error rate, YOU’RE THREE TIMES AS LIKELY TO BE SENT ON A WILD GOOSE CHASE WITH APPLE MAPS.
In the UK, the situation is dicier: you’ll get a good listing 33% of the time with Apple Maps, compared to 88% for Google Maps and 55% for Bing. And 30% of the time, you'll get a listing, but it will be incorrect.
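The combined figures follow directly from the two measurements above: the chance of a good listing is coverage multiplied by the share of listings without a major error. A quick sketch of that arithmetic, using the US figures from the post:

```python
def good_listing_rate(coverage: float, major_error_rate: float) -> float:
    """Chance a search both finds the business and returns an accurate address:
    coverage * (1 - major_error_rate)."""
    return coverage * (1 - major_error_rate)

# US figures from the post:
apple = round(good_listing_rate(0.74, 0.034), 2)   # 0.74 * 0.966 ≈ 0.71
google = round(good_listing_rate(0.89, 0.011), 2)  # 0.89 * 0.989 ≈ 0.88
```

This is how 74% coverage with a 3.4% error rate yields the 71% "good listing" chance quoted for Apple Maps in the US.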
LOCAL DATA IS HARD. Small businesses are closing and starting up all the time. Streets are being re-routed. And Apple is just getting into this game. We'll be very interested to see how Apple's data improves over time.
Want to know more about how CrowdFlower handles business data? Get in touch or read about how we provide high-quality SMB data for sales and marketing.