In this special guest feature, Radha Basu, Founder and CEO of iMerit, outlines how edge cases as simple as traffic cones will continue to be autonomous vehicle technology’s biggest challenge, but solving them will require more than annotating images to train an algorithm. It will also require a combination of smarter algorithms, data sharing, and network intelligence. iMerit is a global AI data solutions company delivering high-quality data that powers machine learning and AI applications for Fortune 500 companies. Under Radha’s leadership, iMerit employs an inclusive workforce of more than 5,000 people worldwide, 80% of whom come from underserved communities and 54% of whom are women. Previously, Radha was Chairwoman and CEO of SupportSoft. She spent 20 years at Hewlett Packard, where she grew HP’s electronic software division into a $1.2 billion business and launched HP in India. Radha has received accolades including the Global Thinkers Forum Award, the UN Women-ITU Gender-Equality Mainstreaming Technology Award, the Silicon Valley Business Journal Women of Influence Award, Top 25 Women of the Web, and CEO of the Year.
What does an orange traffic cone in the road mean? Depending on where it’s placed and the context around it, it can mean a multitude of things. Is it covering a pothole that needs to be filled? Maybe it’s signaling caution because a truck has pulled over to the side of the road. Was it left behind after a lane was freshly painted? Did it fall off the back of a work truck and mean nothing at all? Our human brains can interpret the ‘why’ of the traffic cone in a nanosecond. A car can’t.
So an autonomous vehicle may stop at each cone and assess what to do next. What happens when cones are placed every 20 feet along a 300-foot stretch of road, signaling to cars not to drive on a freshly painted lane line? Imagine the autonomous vehicle starting, stopping, starting, stopping at each cone. Suddenly, the driver has to take over to correct the vehicle’s automatic response. The technology in the car doesn’t understand that a cone in the road doesn’t always mean the vehicle must stop. Edge cases as simple as the traffic cone will continue to be AV technology’s biggest challenge, but solving them will require more than annotating images to train an algorithm. It will also require a combination of smarter algorithms, data sharing, and network intelligence.
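To make the failure mode concrete, here is a minimal sketch in Python of a naive stop-at-every-cone rule versus a more context-aware one. The function names, detection format, and thresholds are illustrative assumptions, not drawn from any production AV stack:

```python
# Illustrative only: a naive planner rule that stops at every cone,
# versus one that uses lane context. All names and thresholds are
# hypothetical.

def naive_response(detections):
    """Stop whenever any cone is detected: the start-stop behavior above."""
    if any(d["label"] == "traffic_cone" for d in detections):
        return "STOP"
    return "PROCEED"

def context_aware_response(detections, lane_context):
    """Treat a long row of cones along a lane line as a lane closure."""
    cones = [d for d in detections if d["label"] == "traffic_cone"]
    if not cones:
        return "PROCEED"
    # A series of cones parallel to a lane line suggests "keep off this
    # lane," not "stop the vehicle."
    if lane_context.get("cones_parallel_to_lane_line") and len(cones) >= 3:
        return "AVOID_CLOSED_LANE"
    return "STOP"  # a lone, unexplained cone: stay conservative

# Example: a row of cones marking fresh paint
detections = [{"label": "traffic_cone"}] * 15
print(naive_response(detections))  # STOP
print(context_aware_response(
    detections, {"cones_parallel_to_lane_line": True}))  # AVOID_CLOSED_LANE
```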
Proactively addressing edge cases before the last mile
Most autonomous vehicle edge cases occur in the last mile, as in the case of the traffic cone. In these situations, an AV cannot properly identify or respond to an unusual obstacle, circumstance or event on the road, which can result in an accident or worse. By nature, these scenes are unstructured, and no two look exactly the same, which makes them difficult for ML and AI systems to learn from. But sometimes edge cases appear because of assumptions we make about how images should be annotated to train the algorithm. An example is training a self-driving car to recognize the road’s shoulder as a boundary so it avoids driving off the road or crashing. But what happens when a road has no shoulder? The vehicle doesn’t respond correctly. The assumption that a road always has a shoulder is the edge case. Creating an annotation workflow that teaches the computer to identify the sides of the road instead of a road shoulder proactively solves this edge case before the car gets to that last mile.
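As a rough illustration of that idea, the sketch below shows what a “road edge” label schema might look like in Python. The class and field names are hypothetical, not iMerit’s actual annotation tooling:

```python
# Hypothetical label schema: annotate the drivable road edge directly,
# rather than assuming a shoulder exists. Names are illustrative only.
from dataclasses import dataclass
from enum import Enum

class BoundaryLabel(Enum):
    ROAD_EDGE = "road_edge"   # present on every road, shoulder or not
    SHOULDER = "shoulder"     # optional attribute, not the hard boundary
    LANE_LINE = "lane_line"

@dataclass
class PolylineAnnotation:
    label: BoundaryLabel
    points: list[tuple[float, float]]  # (x, y) image coordinates

def hard_boundaries(annotations: list[PolylineAnnotation]):
    """Boundaries the model should learn never to cross.

    Keying on ROAD_EDGE instead of SHOULDER means the training signal
    survives on shoulderless roads, removing the edge case up front.
    """
    return [a for a in annotations if a.label is BoundaryLabel.ROAD_EDGE]
```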
Solving edge cases in real time requires data sharing
Today, it’s possible to better solve edge cases in real time, in the context in which they happen, but it requires better data sharing between AV companies and the municipalities where the vehicles operate. For instance, a public works department has a schedule of planned construction projects and road closures. By proactively sharing that schedule with the AV companies driving on its roads, the department can alert vehicles to avoid these areas. This simple data sharing is relatively easy, but it isn’t done to the extent that it could be.
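A sketch of what consuming such a schedule could look like appears below. The endpoint, field names, and timestamp format are assumptions for illustration; no standard municipal feed is implied:

```python
# Hypothetical: pull a city's published road-closure schedule and keep
# only the closures currently in effect. URL and schema are invented.
import json
from datetime import datetime, timezone
from urllib.request import urlopen

FEED_URL = "https://example-city.gov/api/road-closures.json"  # hypothetical

def active_closures(feed_url: str = FEED_URL):
    """Return closures in effect right now, assuming each record carries
    ISO 8601 'start' and 'end' timestamps with UTC offsets."""
    with urlopen(feed_url) as resp:
        closures = json.load(resp)
    now = datetime.now(timezone.utc)
    return [
        c for c in closures
        if datetime.fromisoformat(c["start"]) <= now
        <= datetime.fromisoformat(c["end"])
    ]

# A routing layer could mark each returned segment as impassable before
# any vehicle ever reaches the cones.
```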
In the future, imagine each city with a fleet of vehicles carrying 20+ sensors that measure where the garbage trucks are, which streets have potholes, or where traffic is building up before a basketball game. This data is then shared directly with the AVs on the road, making these smart cars even smarter. This cloud-to-cloud sharing of information gets us to a place where AVs can proactively monitor and respond to the changing environment of a bustling city, edge cases and all. Imagine this technology applied beyond the public street, such as on a hospital system campus. By dynamically mapping pedestrian movements from one hospital building to another within the medical district, AVs can serve other needs, such as transporting patients between facilities.
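One way to picture that cloud-to-cloud exchange is as a stream of small, structured events. The schema below is purely hypothetical; every field name is an assumption, not an existing standard:

```python
# Purely illustrative event format for city-to-AV cloud sharing.
city_event = {
    "event_id": "evt-0192",
    "type": "crowd_buildup",        # or "pothole", "garbage_truck", ...
    "location": {"lat": 37.7793, "lon": -122.4193},
    "radius_m": 150,
    "valid_from": "2024-05-01T18:30:00+00:00",
    "valid_until": "2024-05-01T22:00:00+00:00",
    "source": "city-sensor-network",
}

# An AV fleet's cloud would subscribe to these events and push the
# relevant ones to vehicles routed near the affected area.
```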
These are the baby steps toward AV technology going into the field and becoming commonplace. Until then, proactively addressing edge cases before AVs hit the road is the key.