The ‘Uberisation’ of Dashcam Footage

Article by: Matthew Burgess

A startup looks to monetise the footage captured by your dashboard camera.

Many car accidents aren’t clear-cut. If there is a disagreement over who was at fault, it becomes a battle of ‘he said, she said’, and now you’re in a spot of bother.

It’s in incidents such as these that dashboard cameras (dash cams) have proven highly effective in capturing accidents and other mishaps on the road.

Dash cams are quickly becoming a mainstay in our vehicles. They are profoundly useful for providing evidence of an accident, capturing astronomical events or even letting us revel in the idiocy of others.

But what do we do with our footage when we haven’t had a near miss with an aeroplane?

“We want to utilise the data you record every day”, said Chun-Ting Chiou, CEO and co-founder of OmniEyes. “Footage is constantly being overwritten until something bad happens, but how many car accidents does any individual have a year?”

Chun-Ting Chiou

Using the footage from your dashcam, together with a combination of computer vision and AI, OmniEyes have developed a solution that can automatically detect dangerous driving.

Fleet management and driver behaviour

Fleet management solutions have existed for years, but when it comes to driver monitoring they are becoming ever more intrusive. Are they effective?

A popular solution is to place a camera inside the vehicle, focused on the driver. Viewed at times of suspicion or following an accident, the footage can show whether the driver was distracted or possibly fatigued. Another common solution is the use of on-board diagnostics (OBD-II). Similar to a black box on an aeroplane, OBD-II collects data on a number of parameters which can then be used to indicate driver behaviour.
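For a sense of the kind of parameters OBD-II exposes, here is a minimal sketch using the open-source python-OBD library. It is purely illustrative and not a depiction of any particular fleet-management product.

```python
# Minimal sketch: polling a few OBD-II parameters with the open-source
# python-OBD library (pip install obd). Purely illustrative.
import obd

connection = obd.OBD()  # auto-detects the adapter's serial port

# Speed, engine RPM and throttle position are typical of the indirect
# signals a fleet manager might use to infer driver behaviour.
for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.THROTTLE_POS):
    response = connection.query(command)
    if not response.is_null():
        print(command.name, response.value)  # .value is a unit-aware quantity
```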

When asked about the value of such solutions, Chiou commented, “current solutions can’t tell you if a driver has run a red light or made an illegal turn because the camera is facing the driver”. Chiou continued: “If you use indirect evidence to criticise a driver then they’ll probably tell you that they’re doing what they have to do to avoid an accident or blame other drivers”.

While helpful, the limitations are apparent, which is why we have seen a proliferation of dashcams. They can offer tangible evidence of a driver’s behaviour before, during and after an accident. Yet because of the overwhelming amount of footage collected daily, they still provide scant insight into long-term driver behaviour.

A person can spot traffic violations easily, but there are simply too many hours of dashcam footage to review manually. Today in Taiwan, delivery drivers cover an average of 50 km per day, with taxi drivers reaching around 120 km. While these numbers may seem small to some, multiplied across whole fleets they equate to thousands of hours of inner-city driving.

Companies are often forced to be reactive; they might have an idea which of their drivers is a liability, but it often takes an accident for confirmation. By placing their platform in a camera which faces the road, OmniEyes are able to use their AI to constantly monitor driver behaviour while “alleviating any privacy issues” for the driver.

Training the AI to recognise signage and traffic lights doesn’t in itself indicate that a driver has violated any traffic laws. Chiou says the challenge comes from marrying up a vehicle’s motion status with road signage: “once you know the difference between stopped vs moving; turning left vs turning right, then you can determine illegal behaviour”.

OmniEyes motion status

The tails on the green markers indicate the direction of the car. This evidence, plus the red traffic light, allows the AI to determine that a traffic violation has taken place. (Source: OmniEyes)
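To make the idea concrete, below is a minimal sketch of the kind of rule Chiou describes, pairing a per-frame motion status with detected signage. Every name and rule here is hypothetical: OmniEyes have not published their models, and a production system would need far more context (stop-line geometry, signal phase over time) than these toy checks.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical types; OmniEyes' actual models and outputs are not public.
class Motion(Enum):
    STOPPED = "stopped"
    MOVING = "moving"
    TURNING_LEFT = "turning_left"
    TURNING_RIGHT = "turning_right"

@dataclass
class Frame:
    motion: Motion            # inferred motion status of the recording vehicle
    red_light_visible: bool   # per-frame vision-model detections
    stop_sign_visible: bool

def ran_stop_sign(frames: list[Frame]) -> bool:
    """Naive rule: a stop sign was in view throughout the approach,
    yet the vehicle never entered the STOPPED state."""
    approach = [f for f in frames if f.stop_sign_visible]
    return bool(approach) and all(f.motion is not Motion.STOPPED for f in approach)

def ran_red_light(frames: list[Frame]) -> bool:
    """Naive rule: the light was red while the vehicle kept moving or
    turning. A real system would also confirm the stop line was crossed."""
    at_light = [f for f in frames if f.red_light_visible]
    return bool(at_light) and all(f.motion is not Motion.STOPPED for f in at_light)
```

Neither signal on its own implies a violation; it is the pairing of motion status with signage that does the work, which is exactly Chiou’s point.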

An example is the use of GPS for location and speed tracking. While commonly used, GPS can be unreliable: its accuracy depends on a number of factors, including satellite geometry, atmospheric conditions and the device used. Another factor, especially prevalent in cities, is signal blockage.

A common traffic violation is a vehicle failing to stop at a stop sign, but “GPS isn’t accurate enough to help you here”.

“We don’t know the locations of the traffic signals or signs, there’s actually no mapping done at all. In terms of traffic violations, our AI just uses footage from the camera”.

With the AI placed in the vehicle, all inference takes place locally, with data processed in real time.

Video Uberisation

OmniEyes have struck up a partnership with local logistics company Taiwan Pelican Express. Through this relationship, OmniEyes are now collecting 50,000 km of footage daily.

While traffic violations may occur more frequently than accidents, that’s still a large amount of data for a company to manage. Chiou explains this is why OmniEyes “extracts the gold” from the raw footage before it’s “sent to the cloud for final processing”.

When a traffic violation occurs, only metadata is sent to the cloud: the location and time, plus a small video clip of the infringement. What was once buried and overwritten is now being processed, labelled and packaged for consumption.
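As an illustration of how compact the extracted “gold” can be, a violation event might serialise to something like the record below. The schema is invented for this sketch; OmniEyes have not published their data format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ViolationEvent:
    # Invented field names; not OmniEyes' actual schema.
    vehicle_id: str
    violation_type: str   # e.g. "red_light", "illegal_turn"
    timestamp_utc: str    # time of the infringement, ISO 8601
    lat: float            # location of the infringement
    lon: float
    clip_id: str          # reference to the short video clip, uploaded separately

event = ViolationEvent(
    vehicle_id="fleet-042",
    violation_type="red_light",
    timestamp_utc="2019-06-12T08:31:07Z",
    lat=25.0330,
    lon=121.5654,
    clip_id="fleet-042/20190612T083107.mp4",
)

# Only this compact record plus its clip leaves the vehicle; the hours
# of raw footage around it are overwritten as before.
print(json.dumps(asdict(event)))
```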

With little value traditionally attached to dashcam footage, the data is often erased or kept in-house. OmniEyes plan to change this mentality and develop a new ecosystem which Chiou has labelled “video Uberisation”.

“We are developing a brand-new ecosystem to monetise video. Not just for the benefit of drivers but for all users”, says Chiou. “We will become the OmniEyes sharing platform. The data will come in from the contributors and we’ll pack it together as a consistent unified data service”.

Producers of digital roadmaps are targeting dynamic mapping as a point of difference. They want to give their customers more than simple GPS data, and Chiou believes this will be the start of video Uberisation.

Currently, due to the limitations of GPS, digital roadmaps can only describe traffic at the road level. Through a variation of the aforementioned “gold extraction”, OmniEyes are able to utilise the same data for lane-level detection.

Using real-time camera footage collected from dashcams, “we know the difference between an accident or a red light”, says Chiou. “At the moment a GPS can tell you if the traffic is either northbound or southbound, but it can’t determine the traffic level of each individual lane moving northbound”.
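A toy sketch of that distinction: once a vision model can assign each observed vehicle to a lane, lane-level congestion falls out of a simple aggregation that road-level GPS traces cannot support. The observation format and values below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Invented observations: (road_id, lane_index, speed_kmh), where the lane
# index comes from the vision model rather than from GPS.
observations = [
    ("zhongxiao_e_rd", 0, 8.0),
    ("zhongxiao_e_rd", 0, 5.5),
    ("zhongxiao_e_rd", 1, 42.0),
    ("zhongxiao_e_rd", 1, 38.0),
]

speeds_by_lane: dict[tuple[str, int], list[float]] = defaultdict(list)
for road, lane, speed in observations:
    speeds_by_lane[(road, lane)].append(speed)

# Lane 0 crawls while lane 1 flows freely, a distinction invisible to
# road-level GPS traffic data.
for (road, lane), speeds in sorted(speeds_by_lane.items()):
    print(f"{road} lane {lane}: avg {mean(speeds):.1f} km/h")
```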

For commercial fleets, this creates a whole new revenue stream which has essentially been derived from a by-product.

Who owns the data?

Logistics is a competitive space: 3-5 days for delivery used to be considered fast, but it didn’t take long before we came to expect next-day delivery, which has already progressed to same-day delivery. To meet such demands, commercial fleets have trade secrets; a company wouldn’t want a competitor to access its stopping times and delivery routes, for instance.

With the OmniEyes platform absorbing so much data, who owns it, and how can you protect these trade secrets?

Companies retain ownership of their raw footage; only the metadata extracted using the OmniEyes platform will be used. Chiou explains that a profit-sharing scheme will be in place to “incentivise the sharing of data” and that their customers are excited by the “idea of video Uberisation”.

The name OmniEyes was inspired by the term ‘omnipresent’ and reflects their desire to develop a ubiquitous sensor. Intrigued by the idea of a fleet of AI-enabled cameras prowling the streets, third parties have already approached OmniEyes with multiple requests to develop a facial recognition component.

Concerned about the ethics of equipping such a platform with facial recognition, Chiou is clear that they “don’t collect any facial data at all”.
