By Natalia Sousa

From Toy Story to Self-Driving: Pixar Veterans Join Aurora to Advance Company's Simulation Efforts

Aurora announced it aims to accelerate its simulation efforts by welcoming the team from Colrspace – a creative technology start-up made up of Pixar veterans behind the computer-generated imagery (CGI) magic of iconic movie series like "Toy Story" and "Cars."

Colrspace's team bridges state-of-the-art computer graphics and machine learning with a data-driven approach to reconstructing 3D objects and materials, deployable in real-world environments. As Aurora continues to leverage and expand its Virtual Testing Suite, Colrspace's technology will bring scalability and increased accuracy to the high-fidelity virtual worlds that underpin Aurora's unique sensor simulation capabilities.

"We're able to move quickly because of the smart foundational investments we've made in our technology and the immensely talented people who join our company," said Aurora Co-founder and CEO Chris Urmson. "Colrspace's team and technology will enable us to move even faster in developing simulation and machine learning tools, accelerating our progress towards delivering the Aurora Driver."

With its Virtual Testing Suite, Aurora runs millions of simulations every day. This allows the company to train and evaluate the Aurora Driver's software stack across many scenarios and driving conditions, finding edge cases and catching errors early, well before the software is loaded onto vehicles. Ultimately, simulation testing drives the development of the Aurora Driver. It is the quickest and safest way to train and test Aurora's self-driving technology, with simulation throughput estimated to be equivalent to more than 50,000 trucks driving continuously. This, combined with thoughtful on-road testing, will allow the company to deliver the Aurora Driver safely and quickly at scale.

Colrspace's core innovation is Protocolr, which takes an input image and infers the texture and other material properties of the objects it depicts. Key to the process is a neural network that models the processing of the camera pipeline, coupled with a differentiable image renderer to enable "inverse rendering" – computing the 3D scene and materials that would produce an image identical to the input photo. Aurora believes this technology will provide a unique advantage in building virtual worlds almost indistinguishable from the real one. This is critical because the more realistic the virtual world, the more influential the testing can be.
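To make the idea of inverse rendering concrete, here is a minimal, self-contained sketch: given a target image produced by a trivially simple forward renderer, gradient descent on a photometric loss recovers the material parameter (per-pixel albedo) that would reproduce the photo. The names (`render`, `albedo`, `light`) and the one-parameter Lambertian model are illustrative assumptions, not Aurora's or Colrspace's actual pipeline, which uses a neural camera model and a far richer differentiable renderer.

```python
import numpy as np

def render(albedo, light):
    # Toy Lambertian-style forward model: each pixel is albedo * light intensity.
    # A real differentiable renderer would model geometry, shading, and the
    # camera pipeline; this stand-in is simple enough to differentiate by hand.
    return albedo * light

def inverse_render(target, light, steps=500, lr=0.5):
    # Recover per-pixel albedo by descending the photometric loss
    #   L = mean((render(albedo, light) - target)^2)
    # starting from a flat gray guess.
    albedo = np.full_like(target, 0.5)
    for _ in range(steps):
        residual = render(albedo, light) - target   # difference from the "photo"
        grad = 2.0 * light * residual / target.size  # analytic dL/d(albedo)
        albedo -= lr * grad
    return albedo

# Synthesize a "photo" from known materials, then recover them from the image alone.
true_albedo = np.array([[0.2, 0.8],
                        [0.6, 0.4]])
light = 0.9
target = render(true_albedo, light)

recovered = inverse_render(target, light)
```

Because the loss is convex in this toy model, the optimization converges to the true albedo; the point of a differentiable renderer in practice is that the same gradient-based recipe scales to full 3D scenes and materials.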
