Guide to Tesla Hiding Their Crash Data

Or how to spot when a car company is playing “trust me bro” with deadly data

If you’re investigating a Tesla crash case, or you’re one of the people trying to figure out why a Cybertruck killed three college students in Piedmont, you’ve probably got questions about Tesla’s infamous data production habits. They generate impressive-looking filings (spreadsheets, timestamps, lots of numbers) yet somehow still obfuscate and omit what really happened.

That’s the game. It’s an age-old problem, the kind regulators in other industries exist to stamp out among book-cookers and cheats.

Photo (source: AP): William Lerach, an attorney representing shareholders suing 29 current and former Enron Corp. executives and directors, carries a box of shredded documents into federal court in Houston. He was prepared to ask the judge to ban any shredding by Enron or its former auditor, Arthur Andersen.

This guide will help shine a light on what should be visible, what Tesla has probably been hiding, and how to call out their bullshit in technical terms that will survive a Daubert challenge.

The Musk of a Con

Modern vehicles are data centers on wheels. Everything that happens—every steering input, every brake application, every sensor reading, every system error—gets recorded on data buses called CAN (Controller Area Network). CAN bus data is a bit like the plane’s black box, except:

  • It records thousands of signals per second
  • The manufacturer controls whether it can be decoded
  • There’s no FAA equivalent forcing transparency to save lives

When a Tesla crashes, the vehicle presumably has recorded everything. The question with Tesla, unfortunately, is what they will allow the public to see, and what they will hide to avoid accountability.

Known Unknowns

Tesla’s typical objection to full data production is actually disinformation:

“Your Honor, the vehicle records thousands of signals across multiple data buses. Producing all of this data (poor us) would be unduly burdensome (poor us) and would include proprietary information (poor us) not relevant to this incident. We have provided plaintiff with what we deemed relevant data points from the time period immediately preceding the crash.”

Sounds reasonable, right?

It’s not. Here’s why.

“Subset” = Shell Game

For a 10-minute driving window, you’d get this from the CAN:

  • Size: 5-10 GB of raw data (uncompressed)
  • Messages: Millions of individual CAN messages
  • Signals: Thousands of decoded data points per second

Does this look “unduly burdensome” to anyone?

No. Modern tools routinely handle 100GB+ CAN logs. While initial processing takes hours, comprehensive analysis may take days—but this is standard accident reconstruction work that experts perform routinely. The data volume is NOT a legitimate barrier.
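
To make that concrete, here’s a minimal sketch using the open-source python-can library; the filename and format are assumptions, but the workflow is what any competent expert runs before lunch:

import can
from collections import Counter

# Stream a multi-gigabyte CAN log without loading it into memory.
# "crash_window.blf" is a hypothetical filename; python-can picks the
# right reader (BLF, ASC, and other capture formats) by file extension.
counts = Counter()
first = last = None
for msg in can.LogReader("crash_window.blf"):
    counts[msg.arbitration_id] += 1
    if first is None:
        first = msg.timestamp
    last = msg.timestamp

print(f"{sum(counts.values()):,} messages across {len(counts)} IDs, "
      f"covering {last - first:.1f} seconds")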

Tesla sounds absurdly lazy and cheap, besides being obstructive and opaque.

The real reason Tesla doesn’t want to produce normal data: Complete data exposes their engineering defects and system failures. It allows others to judge them for what they really are.

What Tesla Allows

Their “EDR summary” will probably be stripped down to something like this:

Time (s)   Speed   Throttle   Brake   Steering
-5.0       78      79%        0%      0°
-4.5       79      79%        0%      0°
-4.0       80      79%        0%      5°
-3.5       81      61%        0%      15°
-3.0       82      61%        5%      25°

This tells you what happened but NOT why it happened.

It’s like reading only altitude and airspeed after a plane crash while Boeing refuses to disclose:

  • Engine performance data
  • Control surface positions
  • Pilot inputs
  • System warnings
  • Cockpit voice recorder

Legally sufficient? It shouldn’t be. Imagine turning in a history paper that is just a list of positions and dates. George Washington was at LAT/LON at this time and then LAT/LON at this time. The end.

Technically adequate even for a basic investigation, let alone for root cause analysis? Absolutely not.

Decoder Ring Gap

To decode raw CAN bus data into human-readable signals, you need a DBC file (a CAN database file; see e.g. https://github.com/joshwardell/model3dbc). Josh Wardell, who built up that community DBC, explains why Tesla is worse than usual:

“It was all manual work. I logged CAN data, dumped it into Excel, and spent hours processing it. A 10-second log of pressing the accelerator would take days to analyze. I eventually built up a DBC file, first with five signals, then 10, then 100. Tesla forums helped, but Tesla itself provided no documentation.”

“One major challenge is that Tesla updates its software constantly, changing signals with every update. Most automakers don’t do this. For two years, I had to maintain the Model 3 CAN signal database manually, testing updates to see what broke. Later, the Tesla hacking community reverse-engineered firmware to extract CAN definitions, taking us from a few hundred signals to over 3,000.”

Think of it this way:

  • Raw CAN message: ID: 0x123, Data: [45 A2 F1 08 00 00 C4 1B]
  • With DBC file: “Steering Wheel Angle: 45.2 degrees, Steering Velocity: 15 deg/sec”

Without Tesla’s DBC file, it’s still raw codes. The codes show systems are talking, but what they’re saying isn’t decoded yet.
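
Here’s the gap in code. A minimal sketch using the open-source cantools library with the community DBC from the repo above (the log filename is hypothetical):

import can
import cantools

# Load the community-maintained DBC (built by reverse engineering,
# not provided by Tesla) and try to decode every frame in the log.
db = cantools.database.load_file("Model3CAN.dbc")

for msg in can.LogReader("crash_window.blf"):   # hypothetical log file
    try:
        signals = db.decode_message(msg.arbitration_id, msg.data)
    except KeyError:
        continue   # ID missing from the community DBC: still an opaque code
    print(f"{msg.timestamp:.3f}  {signals}")

Every frame that hits that continue is a signal Tesla can read and you can’t. That’s the decoder ring gap in one loop.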

If you buy an old dashboard off eBay, hook up some alligator clips to the wires, and fire it up, you’ll see a stream of such raw messages. If you capture a ton of those messages and then replay them to the dashboard, you may be able to reverse engineer the codes, but it’s a real puzzle.

Tesla has the complete DBC file. They should be compelled to release it for investigations along with full data.

Piedmont Cybertruck Case

The crash has these technical red flags:

  1. “Autopilot State Not Available” from 03:02:02 until crash
  2. Rear camera stopped recording at 03:06:02 during a turn
  3. 52-second gap (03:06:02-03:06:54) with no camera data before impact
  4. Drive inverter recall for MOSFET defects (sudden unintended acceleration risk)
  5. Steer-by-wire system (no mechanical steering backup)

To understand what actually happened, you need:

1. Complete Steering System Data

  • Steering wheel angle: What driver commanded
  • Steering wheel torque: Driver effort/feedback motor response
  • Front wheel angle (actual): What the wheels actually did
  • Steering motor current: Actuator effort
  • Steering system faults: Any errors detected
  • Redundancy status: Both channels functional?
  • Response latency: Time from command to wheel movement

Key Question: Did the wheels turn as much as the driver commanded?

With steer-by-wire (no mechanical connection), if the computer fails or communication drops, you can turn the steering wheel and nothing happens. That’s not driver error. That’s system failure.
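
If the full decoded log existed, the core steer-by-wire check would be almost embarrassingly simple. A sketch with hypothetical signal names and an assumed steering ratio, since the real ones live in the DBC Tesla withholds:

def steering_tracking_errors(samples, ratio=14.0, tol_deg=2.0):
    """samples: (t, hand_wheel_deg, road_wheel_deg) tuples.
    ratio: assumed hand-wheel-to-road-wheel steering ratio."""
    return [
        (t, hw, rw)
        for t, hw, rw in samples
        if abs(hw / ratio - rw) > tol_deg   # wheels didn't follow the command
    ]

A cluster of tracking errors in the final seconds is system failure, not driver error.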

2. Complete Drive Inverter Data

  • Torque command (desired): What computer requested
  • Torque actual (output): What inverter delivered
  • MOSFET temperature: Thermal stress on recalled component
  • Gate drive voltage: MOSFET control signal
  • Inverter fault codes: Self-diagnostics
  • Current per phase: Motor winding behavior
  • DC bus voltage: Power supply status

Key Question: Did the recalled inverter output unintended torque?

The Cybertruck drive inverter was recalled for MOSFET defects. The failure modes are more complex than Tesla admits:

  • Complete failure: Loss of propulsion (what Tesla publicly acknowledges)
  • Partial degradation: Increased on-state resistance (Rdson) causes excessive I²R heating
  • Thermal cascade: Heat from degraded MOSFETs triggers asymmetric phase currents
  • Torque disturbances: Asymmetric current delivery creates unpredictable torque, especially dangerous during turns when steering and propulsion loads interact
  • Gate driver failure: Can cause MOSFETs to remain partially conducting, delivering unintended torque
  • Regenerative braking failure: Loss of deceleration control when driver releases throttle

The recalled MOSFET defects create thermal stress conditions that can cause unpredictable torque delivery during high-demand maneuvers like turning—exactly when this crash occurred.
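
A back-of-envelope worked example of why “partial degradation” is the scary failure mode; the numbers are illustrative assumptions, not measured Cybertruck values:

# Conduction loss in a MOSFET is roughly P = I^2 * Rdson.
i_phase = 400.0           # amps of phase current in a hard turn (assumed)
rdson_healthy = 0.002     # ohms for a healthy device (assumed)
rdson_degraded = 0.006    # ohms after 3x degradation (assumed)

print(i_phase**2 * rdson_healthy)    # ~320 W of waste heat
print(i_phase**2 * rdson_degraded)   # ~960 W from the same torque command

Same torque command, triple the heat: that is the entry point of the thermal cascade described above.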

3. Complete Autopilot/FSD System Data

  • System state: What mode was active
  • State transition reason: Why “Not Available” at 03:02:02
  • Inter-computer heartbeat: Communication health
  • Processing load: CPU/GPU utilization
  • Sensor validity flags: Which sensors trusted
  • Error logs: Fault detection

Key Question: What caused “Not Available” and how did it affect other systems?

“Autopilot State Not Available” doesn’t mean “Autopilot was off.” It means the system couldn’t report its state. This requires distinguishing between:

  • Logged state transition: System actively reported entering “Not Available” mode (error code will exist)
  • Absence of telemetry: No status reports received from Autopilot computer (communication failure)
  • Normal “Not Available”: Brief states during cold start or mode transitions (typically seconds)
  • Abnormal “Not Available”: Extended duration indicating system failure

The 4-minute-52-second duration from 03:02:02 until crash is abnormally long. Tesla must produce:

  • Fleet-wide statistics on “Autopilot Not Available” duration to establish what’s normal
  • Specific error codes logged at 03:02:02
  • Which computer failed to communicate
  • What recovery attempts the system made

Even when “off,” the Autopilot computer provides data to other vehicle systems. If it’s not communicating, other systems may malfunction.
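
Distinguishing a logged state transition from an absence of telemetry is mechanical once the data exists. A sketch, assuming a periodic Autopilot status message whose real name and period only Tesla can confirm:

def heartbeat_gaps(timestamps, expected_period=0.1, factor=5):
    """timestamps: sorted arrival times of the Autopilot status message.
    Flags silences longer than `factor` expected periods."""
    return [
        (prev, cur, cur - prev)
        for prev, cur in zip(timestamps, timestamps[1:])
        if cur - prev > factor * expected_period
    ]

A logged transition leaves an explicit state message with an error code; a communication failure leaves nothing but a gap this function would flag.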

4. Complete Camera/Vision System Data

  • Recording status per camera: Which were active
  • Frame rate: Processing keeping up?
  • Storage system health: Disk errors?
  • Image processing load: Vision AI performance
  • Object detection status: What was “seen”

Key Question: Why did recording stop during the turn at 03:06:02?

Camera recording doesn’t just randomly stop. Possible causes:

  • Storage system failure
  • Power interruption
  • Thermal shutdown
  • System crash
  • Processing overload

The timing matters: Why during a turn? Turns are high-demand events—the vehicle processes steering angle, yaw rate, lateral acceleration, multiple sensor inputs simultaneously. If the system was marginal, a turn could push it over the edge.

5. Complete Vehicle Dynamics Data

  • Wheel speed (all 4): Actual vehicle motion
  • Yaw rate: Rotation rate
  • Lateral acceleration: Side forces
  • Longitudinal acceleration: Speed changes
  • Stability control mode: What intervention active
  • ABS status: Brake modulation

Key Question: Was the vehicle responding normally to steering inputs?
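
There’s a textbook cross-check for that question: at moderate speeds, the kinematic bicycle model predicts yaw rate from speed and road-wheel angle. A sketch under an assumed wheelbase and tolerance (a real reconstruction would use a dynamic model and tire data):

import math

def yaw_rate_anomalies(samples, wheelbase_m=3.6, tol=0.15):
    """samples: (t, speed_mps, road_wheel_angle_rad, measured_yaw_rps).
    wheelbase_m: roughly 3.6 m assumed for a Cybertruck; verify."""
    flagged = []
    for t, v, delta, yaw_meas in samples:
        yaw_expected = v * math.tan(delta) / wheelbase_m   # bicycle model
        if abs(yaw_expected - yaw_meas) > tol * max(abs(yaw_expected), 0.1):
            flagged.append((t, yaw_expected, yaw_meas))
    return flagged

Expected and measured yaw agreeing means the vehicle obeyed its steering. Diverging means it didn’t, and the question becomes why.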

6. Complete Power Distribution Data

  • 12V bus voltage and current
  • High voltage battery management system
  • DC-DC converter status
  • Power supply sequencing to safety-critical systems
  • Brownout/voltage sag events
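
That last bullet is checkable with a one-screen script. A sketch assuming a decoded 12V bus voltage trace; the threshold and duration are assumptions, since each safety computer has its own undervoltage spec:

def brownout_events(samples, v_min=10.5, min_dur_s=0.005):
    """samples: (t, volts) tuples from the 12V bus.
    Returns (start, end, lowest_volts) for each qualifying sag."""
    events, start, low = [], None, None
    for t, v in samples:
        if v < v_min:
            if start is None:
                start, low = t, v
            else:
                low = min(low, v)
        elif start is not None:
            if t - start >= min_dur_s:
                events.append((start, t, low))
            start = None
    return events

A sag that lines up with 03:06:02 would explain a camera and a computer dropping out together.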

7. Complete Software Configuration Data

  • Firmware version: Build number for each ECU (Electronic Control Unit)
  • OTA update history: Last over-the-air update timestamp
  • Known issues list: Documented bugs for deployed software versions
  • Version correlation: Did recent update occur before crash?
  • Boot logs: System initialization and version verification
  • Rollback history: Any automatic software reversions

Key Question: Did a recent software update introduce system instability?

Tesla pushes over-the-air updates constantly, sometimes changing vehicle behavior overnight. If a software update occurred days or weeks before the crash, it could have introduced:

  • Communication protocol changes causing inter-system failures
  • Processing load increases overwhelming marginal hardware
  • New bugs in control algorithms
  • Timing changes affecting real-time safety systems

The software version at time of crash is as critical as the hardware configuration.

8. Complete Network Architecture Performance

Tesla’s Ethernet backbone carries gigabytes per second of critical data:

  • All camera feeds (multiple cameras at high resolution)
  • Autopilot inter-process communication
  • Sensor fusion data streams
  • High-bandwidth system diagnostics

Required network data:

  • Packet loss rates: Network reliability under load
  • Bandwidth utilization: Was network saturated?
  • Switch port statistics: Network device performance
  • Quality of Service (QoS) violations: Priority traffic delays
  • Retransmission rates: Communication reliability
  • Network topology health: Which devices were communicating

Key Question: Did network congestion cause simultaneous camera and Autopilot failures?

If the Ethernet network experienced congestion or packet loss at 03:06:02, this could explain why multiple systems failed simultaneously during the high-demand turn maneuver. Network saturation is a common-cause failure that affects multiple systems at once.
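
If Tesla produced per-window switch counters, the saturation question would answer itself. A sketch over assumed counter fields:

def saturated_windows(stats, util_limit=0.9):
    """stats: (t, bytes_per_s, capacity_bytes_per_s, dropped_frames).
    Flags windows with high utilization or any dropped frames."""
    return [
        (t, b / cap, drops)
        for t, b, cap, drops in stats
        if b / cap > util_limit or drops > 0
    ]

Overlay the flagged windows on the 03:06:02 timeline: congestion hitting camera storage and Autopilot telemetry at once is a classic common-cause signature.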

9. Complete Thermal Management Data

  • Battery pack temperatures: Cell-level thermal distribution
  • Inverter temperatures: MOSFET junction temps and heatsink performance
  • Computer/processor temperatures: Autopilot and control computers
  • Cooling system status: Pump speeds, coolant flow rates, fan operation
  • Thermal throttling events: System performance reduction due to heat
  • Ambient temperature: Environmental conditions at time of crash

Key Question: Did thermal issues cause cascading system failures?

Thermal problems can trigger multiple simultaneous failures:

  • Overheated computers reduce processing capacity or crash entirely
  • Hot MOSFETs accelerate thermal runaway in recalled inverters
  • Battery thermal limits trigger power delivery restrictions
  • Cooling system failure cascades across multiple subsystems sharing thermal management

The timing of failures—progressive degradation from 03:02:02 through final failure at 03:06:02—is consistent with thermal cascade failure patterns.

Timeline of Defects as Reconstructed

03:02:02 - "Autopilot Not Available" begins
           ↓
           What signals changed at this moment?
           - Communication bus errors spike?
           - CPU load increase?
           - Sensor validity flags change?
           - System attempting to switch modes?

03:04:26 - Rear camera records people on street
           ↓
           This is LAST confirmed camera recording
           Establish "system healthy" baseline
           Are other cameras still working?

03:06:02 - Turn onto Hampton Road + rear camera stops
           ↓
           Simultaneous events:
           - Steering input increases (turning)
           - Camera recording ceases
           - Processing load spike?
           - Storage system error?
           - Communication errors?

03:06:02-03:06:54 - The missing 52 seconds
           ↓
           - Speed profile through residential street
           - Steering inputs vs. vehicle response
           - Any system warnings to driver?
           - Driver attempting corrections?
           - Why no effective braking?

03:06:54 - Impact

What to look for: Correlation between system failures

If at 03:06:02 you see:

  • Camera recording stops
  • Steering command rate increases (the turn)
  • Processing load spikes
  • Communication error rate increases

Hypothesis: System overload during high-demand maneuver.

Engineering question: Is the computing architecture adequate for worst-case scenarios, or did Tesla ship a system that fails when you need it most?
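
That hypothesis is testable the moment the data exists. A sketch of the coincidence test, with hypothetical stream names: collect anomaly timestamps per subsystem and flag moments when several misbehave within the same half second:

def coincident_anomalies(streams, window_s=0.5):
    """streams: dict of subsystem name -> sorted anomaly timestamps
    (seconds since midnight, so 03:06:02 is 11162).
    Returns (t, names) where two or more subsystems failed together."""
    events = sorted((t, name) for name, ts in streams.items() for t in ts)
    hits = []
    for t0, _ in events:
        names = {n for t, n in events if t0 <= t <= t0 + window_s}
        if len(names) >= 2:
            hits.append((t0, sorted(names)))
    return hits

# Camera stop, steering spike, and bus errors inside half a second all
# land in one hit, pointing every finger at the same instant.
print(coincident_anomalies({"camera": [11162.0], "steering": [11162.1],
                            "can_errors": [11162.3]}))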

Spotting Incomplete Data Production

Can We Tell When Tesla Hides Data?

1. Time Coverage Gaps

  • They give you 5 seconds before crash
  • You need: 03:02:02 (when “Autopilot Not Available” started) through crash
  • Why it matters: System failures develop over time. The 4-minute gap shows progressive degradation.

2. Missing System Categories

  • You get: speed, throttle, brake, steering wheel angle
  • You don’t get: inter-system communication, fault codes, actuator responses, sensor validity
  • Why it matters: Can’t determine if systems functioned correctly without seeing their internal states.

3. Sampling Rate Inadequacy

  • They give you: 1 Hz (one sample per second)
  • You need: 10-100 Hz for control systems, 1000 Hz for critical safety
  • Why it matters: Crashes happen in milliseconds. 1 Hz data misses everything important.
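
A few lines demonstrate the point: take one second of a 100 Hz steering trace containing a 300 ms failure transient, then look at it the way a 1 Hz production would (numbers illustrative):

# One second of a 100 Hz steering trace with a 300 ms transient in it.
trace_100hz = [0.0] * 100
for i in range(40, 70):
    trace_100hz[i] = 45.0          # degrees, illustrative failure transient

print(max(trace_100hz))            # 45.0 -> at 100 Hz the event is obvious
print(trace_100hz[0])              # 0.0  -> the lone 1 Hz sample sees nothing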

4. Missing DBC File

  • They give you: decoded subset they chose
  • You need: complete DBC database file so you can decode everything
  • Why it matters: They’re choosing what you see. That’s not discovery, that’s curation.

5. Incomplete Signal Definitions

  • They show: “Stability Control: Active”
  • You need: WHY it activated, WHAT intervention it attempted, HOW vehicle responded
  • Why it matters: “Active” tells you nothing about whether it worked correctly.

6. Missing Entire CAN Buses

Modern Teslas have 3-5 separate CAN buses plus Ethernet. If they only produce “Powertrain CAN” data, you’re missing:

  • Chassis CAN (steering, brakes, stability)
  • Body CAN (could show power issues)
  • Diagnostic CAN (fault codes)
  • Ethernet (camera/Autopilot data)

What to Do About It

Retain an automotive systems engineer:

  • Experienced with CAN bus forensic analysis
  • Uses tools like Intrepid R2DB or Vector CANalyzer
  • Has testified in automotive defect cases
  • Can articulate exactly what signals are missing
  • Can compare Tesla’s production to industry standards

Not a mechanical engineer. Not a general accident expert. You need someone who lives in vehicle control systems and software.

What This Looks Like in Practice

The Piedmont Case Timeline

Based on what we know:

03:02:02 – System shows “Autopilot State Not Available”

  • This is 4 minutes and 52 seconds before crash
  • Something failed here
  • Tesla’s subset probably starts at 03:06:49 (5 seconds before impact)
  • You’re missing the 4:47 that shows how it fell apart

03:04:26 – Camera records people on street

  • Last confirmed recording
  • Shows system was still partially functional
  • Establishes baseline 24 seconds before camera stops

03:06:02 – Turn onto Hampton Road + camera stops

  • Steering demand increases (making the turn)
  • Recording ceases simultaneously
  • This is not random
  • 52 seconds of no data before impact

03:06:54 – Impact

What Complete Data Would Show

With full CAN data and DBC, you could determine:

At 03:02:02 when “Autopilot Not Available” began:

  • What communication failed
  • What error codes were logged
  • What system attempted recovery
  • Whether other systems were affected
  • Processing load before and after
  • Communication bus error rates

During the 4:47 gap (03:02:02-03:06:49):

  • Progressive system degradation
  • Sensor validity changes
  • Communication health trends
  • Whether driver received warnings
  • System recovery attempts

At 03:06:02 when camera stopped:

  • All simultaneous system events
  • Processing load spike?
  • Storage system failure?
  • Power fluctuation?
  • Communication breakdown?

During the fatal 52 seconds:

  • Steering inputs vs. actual wheel angles
  • Torque commands vs. inverter output
  • Brake system response
  • Stability control intervention (if any)
  • Why no effective speed reduction
  • System warnings to driver

Without this data, everyone else is denied what Tesla can already see.

Industry Comparison: How Other Investigations Work

Aviation (NTSB Protocol)

After a crash, investigators get:

  • Complete flight data recorder (hundreds of parameters)
  • Complete cockpit voice recorder
  • Complete maintenance logs
  • Complete software versions
  • Manufacturer required to provide engineering support
  • Manufacturer provides documentation for all systems

Nobody says: “The FDR records too much data. We’ll just give you altitude and airspeed.”

Automotive (NHTSA Protocol)

NHTSA’s Office of Defects Investigation routinely:

  • Obtains complete CAN bus logs
  • Receives complete DBC files under confidentiality
  • Gets engineering support from manufacturers
  • Allows independent analysis
  • Compares fleet data for pattern identification

This is standard practice. Tesla knows this. They’re choosing not to comply.

Tesla the No True Scotsman

Tesla argues their vehicles are “computers with wheels” to generate buzz around their Autopilot and FSD.

Then when crashes happen, suddenly they’re just cars and computer data is proprietary and private.

You can’t have it both ways.

If it’s a computer-controlled vehicle, then computer data is crash data. And if “huge amounts of data” is what makes Tesla successful, that same data is directly relevant when it shows why Tesla failed.

Hold the Line on Tesla

Questions They Must Answer

1. System Communication:

  • What is the complete communication architecture?
  • Which systems share buses/networks?
  • What is normal message rate for each critical system?
  • Were any communication errors logged?
  • What are the failure modes when communication degrades?

2. Temporal Correlation:

  • Timeline of all system state changes
  • Correlation between “Autopilot Not Available” and other anomalies
  • Why camera stopped when steering demand increased
  • Progressive vs. sudden failure pattern

3. Control System Response:

  • Commanded vs. actual comparison for all actuators
  • Latency measurements
  • Fault detection and response times
  • Sensor validity and fusion

4. Failure Mode Analysis:

  • What failures could cause observed symptoms?
  • What does complete failure tree look like?
  • Which scenarios can be ruled out and why?
  • Which scenarios require additional data to evaluate?

If Tesla doesn’t address these fundamentals, their reports are worthless.

Burden of Proof When Manufacturer Controls Evidence

When a manufacturer exclusively controls critical evidence and produces only a curated subset, courts may:

  1. Draw adverse inferences: Assume undisclosed data would harm defendant’s case
  2. Shift burden of proof: Require manufacturer to prove system functioned correctly
  3. Allow spoliation instructions: Tell jury the missing evidence would have supported plaintiff

Tesla’s selective production—providing pre-filtered summaries while retaining complete logs—meets the legal standard for adverse inference.

Federal regulation 49 CFR Part 563 already establishes that EDR data must be accessible for investigation. Tesla cannot claim trade secret protection for data required by federal safety regulations.

Courts have consistently held that safety-critical system behavior data is not proprietary when lives are at stake. Plaintiffs agree to appropriate protective orders for genuinely proprietary information, but crash causation data must be produced.

If Tesla claims complete data would exonerate them, they must produce it. They cannot hide exculpatory evidence while claiming it’s proprietary.

The specific data requests are driven by:

  1. Known inverter recall: NHTSA recall for MOSFET defects creating sudden unintended acceleration risk
  2. Documented system failure: “Autopilot State Not Available” for 4 minutes 52 seconds before crash—abnormally long duration
  3. Correlated camera failure: Recording stops during turn at 03:06:02, exactly when system demand peaks
  4. Steer-by-wire system: No mechanical backup if electronic steering fails—ISO 26262 requires complete failure mode data for safety-critical systems
  5. Industry standards: SAE J2728, J2980, and ISO 26262 require this data for root cause analysis in sudden unintended acceleration investigations

The Bottom Line

Tesla has all the data. They recorded it. They have the tools to decode it. They have the expertise to analyze it. And they should be obligated to use it to save lives.

They’re choosing to run and hide.

Why? Because complete data would show:

  • System failures they don’t want to admit
  • Design defects they don’t want to fix
  • Patterns across the fleet they don’t want revealed

Don’t accept the subset. Don’t let Tesla curate what the public sees; disclosure exists to protect the public from Tesla.

Standards References:

  • SAE J1939 (CAN for vehicles)
  • SAE J2728 (Event Data Recorder – EDR)
  • SAE J2980 (EDR requirements)
  • SAE J3061 (Cybersecurity for Cyber-Physical Vehicle Systems)
  • ISO 21434 (Vehicle Cybersecurity engineering)
  • ISO 26262 (functional safety)
  • ISO 11898 (CAN specification)
  • FMVSS 126 (Electronic Stability Control)
  • ISO 16750 (Environmental conditions and testing for electrical equipment)

The Piedmont victims’ families, like many others, deserve answers. The complete data exists, and the experts are ready to review it. Let’s see if Tesla CAN produce the right stuff for once.
