How Model-Prime Can Help as Your Robotics Data Needs Scale
Robotics data only gets denser and more voluminous as your company scales. In this post, we outline the data and workflow problems that begin to present themselves as you try to wrangle your mountain of robotics data. Building robots is hard enough; we'd love to help you focus on robotics engineering, not data engineering, as your data needs grow.
Preparing for scale
If you’re reading this, you probably work at a robotics company. Maybe you’ve realized just how much data your robotics fleet is generating as it starts to scale. Maybe you’re wondering what lurks around the next corner as your company grows. This post aims to serve as a guide for what to expect and how Model-Prime’s platform can help.
“I want to see what data is on my robot and push a code change.”
That’s the common use case when your company is small.
Scaling your company
Robotics companies often scale across a few axes that affect both data consumption and data generation.
Hiring more people who will consume robot log data:
More robotics developers
More non-developers: analysts, data scientists, legal staff, and triage analysts
Increasing the robot fleet size, resulting in the generation of a lot more data:
Generally, sensor kits create increasingly dense data over time.
Different hardware/software configurations create inconsistent logs across your entire log collection.
Sometimes these differences are subtle and hard to discern.
People, People, and More People
As your org grows, your use cases start to expand:
I’m a robotics org and want performant ETL so I can quickly find and view my data.
I’m a robotics developer and want to get data from the last build I pushed to see how it performed.
I’m a robotics developer and want to search for logs to help me build a new ML model.
I’m an analyst looking for trends in robotics performance data.
I’m leading a robotics engineering org and need:
A performant offboard format so my tools can read it easily.
To understand what data is useful and not useful to reduce data storage costs.
To manage my robot log data according to legal and compliance requirements.
To implement an organized data retention policy driven by how my team uses data.
A basic toolkit for robot log operations.
Performant log search for multiple job functions in my company.
I’m a triage analyst and need to get the newest logs and annotate them.
I’m a robotics developer and need labeled data that is tagged consistently so I can build my models.
I’m a data scientist/roboticist/analyst and I want to ETL data from my robotics logs into a table using self-service tools.
I’m a robotics developer and I want to create a tagger to automatically tag log attributes as they are uploaded.
I’m a robotics developer and want to search for logs, assemble them into a dataset, revise that dataset, and adjust the training, validation, and holdout splits so I can train my models and backtest them without sub-optimizing performance.
I’m a triage analyst, robotics developer, or legal representative and need a way to share logs or log snippets easily.
I’m an executive and need dashboards for daily/weekly/monthly decisions.
And more …
As the number of people in your org grows, so does the number of people consuming log data, and the inability to find logs becomes increasingly expensive.
Furthermore, your organization starts becoming more dependent on having organized ways of getting logs into ML/DL and simulation workflows. You also have to make sure that the logs that you need for ML, offline testing, and other workflows are preserved.
The simple workflow that we outlined above morphs into the workflow below.
This workflow is notable in a few ways:
It’s far more complex.
It involves a lot more automation.
It involves a lot more collaboration between people at the company.
Currently, no tooling exists to help move logs through this workflow.
Logs, Logs, and More Logs
With fewer people and fewer robots, you are able to sort through the data and get to what you need using manual methods. When you have a handful of robots generating data, your team members may even be able to write small scripts to isolate events of interest.
At a small scale, we’ve encountered startups that don’t offload their logs at all – they simply SSH into their robots and grab the data they need.
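Those early scripts often look something like the sketch below: a single-file scanner that pulls matching events out of a log and merges nearby hits into segments. This is a minimal illustration, not a real tool; the JSON-lines format and field names (`t`, `event`) are assumptions, and your own log schema will differ.

```python
import json

def find_events(log_path, event_type, window_s=5.0):
    """Scan a JSON-lines robot log and return (start, end) timestamps
    of matching events, merging hits that fall within window_s seconds.

    Assumes a hypothetical log format: one JSON object per line with
    't' (seconds) and 'event' keys -- adjust for your own schema.
    """
    hits = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if record.get("event") == event_type:
                hits.append(record["t"])

    # Merge timestamps within the same window into one segment,
    # so one incident doesn't produce dozens of separate results.
    segments = []
    for t in hits:
        if segments and t - segments[-1][1] <= window_s:
            segments[-1] = (segments[-1][0], t)
        else:
            segments.append((t, t))
    return segments
```

Scripts like this work fine for a handful of robots, but they assume one log format, one storage location, and one person who remembers where the script lives – all assumptions that break as the fleet grows.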
As your data scales, you will encounter more and more challenges with the above approaches. The volume of data is not only driven by the robot development and production fleets, but also by development processes such as simulation and software validation.
Manually playing back all of your logs to identify useful data becomes untenable. Efficient navigation and use of data by developers and other team members becomes increasingly difficult, or outright impossible, without additional capabilities:
Tools to autotag and extract useful data.
Tools to enrich logs with metadata that is meaningful to your development workflows.
Mechanisms to feed only the required data into other workflows, such as test, simulation, and ML training.
Performant search of log data.
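To make the autotagging idea above concrete, here is a minimal sketch of a tagger that enriches log metadata at ingest time. The `LogMetadata` fields, tag names, and threshold rules are hypothetical placeholders; a real pipeline would load team-defined taggers and apply them to every uploaded log.

```python
from dataclasses import dataclass, field

@dataclass
class LogMetadata:
    # Hypothetical metadata record -- field names are illustrative only.
    robot_id: str
    duration_s: float
    max_speed_mps: float
    tags: set = field(default_factory=set)

def autotag(meta: LogMetadata) -> LogMetadata:
    """Attach tags at ingest time so later searches are cheap.

    These rules are placeholders; in practice each team defines
    taggers that encode what 'interesting' means for their workflows.
    """
    if meta.duration_s < 60:
        meta.tags.add("short-run")
    if meta.max_speed_mps > 2.0:
        meta.tags.add("high-speed")
    return meta
```

The payoff is that downstream consumers search over small, structured tags instead of replaying raw sensor data to find what they need.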
Symptoms of a data problem
Without appropriate tooling and capabilities as mentioned above, a number of problems can develop:
Lost or deleted critical log data
Proliferation of spreadsheets to keep track of logs, annotations, and other data
High storage costs from inefficient data retention policies
Proliferation of internal tools and scripts among different developers, resulting in inconsistent workflows
Lost developer time
Lost organizational progress
Wasted time and money
The Model-Prime data platform offers a set of services and tools for your team to improve efficiency for your organization, including:
Easy and scalable log metadata ingestion so your team members can quickly find the data they need.
Note: This process occurs after your logs are offloaded to the cloud or other storage.
Self-service tools to extract metrics from your logs and aggregate them
Simple metadata enrichment of log data
Performant search of log data
Log set management, including versioning, to support data science, ML, and DL workflows
Snippeting tools to create log segments that are consistent and consumable by downstream workflows and tools, reducing the overall data volumes processed and stored
Data retention policy management
Reduction in the time data consumers wait for logs to become available for metadata and search
Reduction in complexity of enriching log data with metadata that is relevant to developer and organizational workflows
Reduction in data consumer time spent searching for logs
Reduction in effort to integrate log data into downstream workflows, such as analytics, simulation, and ML
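Tag-driven retention, one of the capabilities listed above, can be sketched in a few lines: logs carrying high-value tags are kept longer than the default. The tag names and retention windows below are illustrative assumptions, not recommendations.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tag-driven retention rules: tags that matter to the
# business keep a log alive longer than the fleet-wide default.
RETENTION = {
    "collision": timedelta(days=3650),     # keep incident logs ~10 years
    "ml-training": timedelta(days=730),    # keep training data ~2 years
    "default": timedelta(days=90),
}

def should_retain(tags, uploaded_at, now=None):
    """Return True if a log is still inside its retention window.

    A log's window is the longest window among its matching tags,
    falling back to the default when no tag matches.
    """
    now = now or datetime.now(timezone.utc)
    window = max(
        (RETENTION[t] for t in tags if t in RETENTION),
        default=RETENTION["default"],
    )
    return now - uploaded_at <= window
```

Taking the maximum across matching tags means a log is never deleted early just because one of its tags has a short window – a deliberately conservative choice for a policy whose mistakes are irreversible.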