First, Israel has confirmed using drone swarms in combat.
…in mid-May, the Israel Defense Forces (IDF) used a swarm of small drones to locate, identify and attack Hamas militants. This is thought to be the first time a drone swarm has been used in combat.
Second, a June 14th drone swarm in Shanghai suddenly fell apart and dozens crashed, causing injury and damage.
Source: “Dozens of drones on the Bund in Shanghai accidentally fall and hurt people?”, Kanzhaji.com
And speaking of loitering munitions, a third news story confirms the US Marines are adopting Israeli technology.
Manufactured by the Israeli company UVision Air, the system has been selected after the completion of several successful demonstrations, tests, and evaluation processes. The system will provide the Marine Corps with ISR and highly accurate, precision indirect fire strike capabilities.
Sand behaves like a fluid, which makes driving on it hard (pun not intended) even for humans.
It’s like driving on snow or mud, yet it seems far less well studied by car manufacturers, presumably because so few of their customers ever encounter it.
Source: Simulator Game Mods “Summer Forest”. Snow and mud driving simulations are easy to find, yet sand simulations are notably absent.
Traction control, for example, is a product designed for “slippery” conditions. That usually means winter conditions, or rain on pavement, where brakes are applied by an “intelligent” algorithm detecting wheel spin.
In sand there is always going to be some manner of wheel spin, which sends the computer haywire and makes it do the opposite of helping. Applying the brakes, let alone repeatedly, is about the worst thing you can do in sand.
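To make the failure mode concrete, here is a minimal sketch of a naive slip-detection loop of the kind just described. It is purely illustrative (not any manufacturer’s actual code, and the 10% threshold is an assumption): on pavement, slip is a rare anomaly worth braking for; in sand, slip is constant and healthy, so the same logic fires endlessly.

```python
# Hypothetical sketch of a naive traction-control loop (illustrative
# only; the threshold and speeds are assumptions, not a vendor's code).

def slip_ratio(wheel_speed: float, vehicle_speed: float) -> float:
    """Fraction by which a wheel spins faster than the car moves."""
    if vehicle_speed <= 0:
        return 0.0
    return (wheel_speed - vehicle_speed) / vehicle_speed

def traction_control_action(wheel_speed: float, vehicle_speed: float,
                            slip_threshold: float = 0.1) -> str:
    # Pavement assumption baked in: slip above ~10% means "losing grip,
    # apply brakes". In sand, normal driving slip exceeds this constantly.
    if slip_ratio(wheel_speed, vehicle_speed) > slip_threshold:
        return "apply_brake"
    return "none"

# Pavement: a brief slip spike triggers one corrective brake pulse.
print(traction_control_action(wheel_speed=12.0, vehicle_speed=10.0))  # apply_brake

# Sand: every reading shows slip, so the brakes get pumped continuously,
# which is exactly what digs a car in.
sand_readings = [(13.0, 10.0), (14.0, 10.0), (13.5, 10.0)]
print([traction_control_action(w, v) for w, v in sand_readings])
# ['apply_brake', 'apply_brake', 'apply_brake']
```

The design flaw is not the math; it is the unstated assumption that slip is an exception rather than the normal operating condition.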
On top of that, the tire pressure monitoring system has no concept of the “float” profile required for sand. When the usual algorithm equates roughly 40psi with safe driving, deflating to the necessary 18psi can turn a dashboard into a disco ball.
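The disco-ball effect comes from a hardcoded threshold. A minimal sketch of such a check (illustrative numbers, not any vendor’s real thresholds) shows why deliberately airing down for sand lights up every warning:

```python
# Hypothetical TPMS alert check. The threshold is an assumption for
# illustration: roughly 80% of a nominal ~40psi placard pressure.

LOW_PRESSURE_PSI = 32.0

def tpms_warnings(pressures_psi: dict) -> list:
    """Return a warning for each tire below the hardcoded low-pressure line."""
    return [f"LOW PRESSURE: {tire} at {psi:.0f}psi"
            for tire, psi in pressures_psi.items()
            if psi < LOW_PRESSURE_PSI]

# Highway pressures: no warnings.
print(tpms_warnings({"FL": 40, "FR": 40, "RL": 40, "RR": 40}))  # []

# Aired down for sand "float": every tire alarms, even though ~18psi
# is exactly what safe sand driving needs.
print(tpms_warnings({"FL": 18, "FR": 18, "RL": 18, "RR": 18}))
```

A fixed threshold cannot distinguish “dangerous leak” from “deliberate float profile”, because terrain context was never a design input.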
The problem is that product manufacturers treat core safety competencies as nice-to-have features instead of requirements. And by the time they get around to developing core competencies for safety, they over-specialize and market them as expensive, fetishized “Rubicon” and “Racing Design” options (let alone “WordPress“).
In other words, complex or dangerous scenarios must be mastered before any primary path can be considered safe, yet for driverless development they often get pushed onto a backlog. Such a low bar of competency means driverless technology is far, far below even basic human skill.
Imagine it like exception handling or negative testing being dismissed as unnecessary because driverless cars are expected to operate only in the most perfect world. In other words, why even install brakes or suspension if every car travels parallel to all other traffic at the same rate of speed, like a giant herd? Or an even better example: why design brakes for a car if the vast majority of the time people don’t have to deal with a stop sign?
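The happy-path fallacy above can be sketched in a few lines. This is a hypothetical illustration (the function and scenario are invented for the example): a controller “tested” only on perfect input passes, while one basic negative test, the driving equivalent of hitting sand, exposes it.

```python
# Hypothetical happy-path controller: steer toward the average of
# detected lane markings. Nothing here is real autonomy code.

def follow_lane(lane_markings: list) -> float:
    """Assumes markings are always detected -- the happy path."""
    return sum(lane_markings) / len(lane_markings)  # crashes if list is empty

# Positive test: perfect world, passes.
assert follow_lane([-1.0, 1.0]) == 0.0

# Negative test: markings vanish (sand, snow, glare). If this case was
# "backlogged", the failure is only discovered on the road.
try:
    follow_lane([])
    handled = True
except ZeroDivisionError:
    handled = False
print("handles missing markings:", handled)  # False
```

Shipping without the negative test is exactly the “why install brakes?” reasoning: the code is only correct in the world it assumed.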
Recently I put a new car with the latest driverless technology to the test on dry sand. I was not surprised when it quickly became confused and got stuck, and it reminded me of the poem “Dans l’interminable” by Paul Verlaine (1844 – 1896).
Dans l’interminable
Ennui de la plaine,
La neige incertaine
Luit comme du sable.
Le ciel est de cuivre
Sans lueur aucune.
On croirait voir vivre
Et mourir la lune.
Comme des nuées
Flottent gris les chênes
Des forêts prochaines
Parmi les buées.
Le ciel est de cuivre
Sans lueur aucune.
On croirait voir vivre
Et mourir la lune.
Corneilles poussives,
Et vous, les loups maigres,
Par ces bises aigres
Quoi donc vous arrive?
Dans l’interminable
Ennui de la plaine
La neige incertaine
Luit comme du sable…
The Ford Pinto’s engineering design flaws are infamous; it was the car most associated with preventable fire risk until… TESLA (updated July 2nd):
The driver, identified as an “executive entrepreneur”, was initially not able to get out of the car because its electronic door system failed, prompting the driver to “use force to push it open,” Mark Geragos, of Geragos & Geragos, said on Friday. The car continued to move for about 35 feet to 40 feet (11 to 12 meters) before turning into a “fireball” in a residential area near the owner’s Pennsylvania home. “It was a harrowing and horrifying experience,” Geragos said. “This is a brand new model… We are doing an investigation. We are calling for the S Plaid to be grounded, not to be on the road until we get to the bottom of this,” he said.
Hot off the desk of the unprofessional PR department at Tesla comes the related and important story that their cars have a serious acceleration bug forcing a massive recall:
The remote online software ‘recall’ — a first for Tesla cars built in China — covers 249,855 China-made Model 3 and Model Y cars, and 35,665 imported Model 3 sedans.
The nearly 300,000 cars being flagged for a critical safety failure are at risk of sudden acceleration due to problems with Tesla’s self-proclaimed “autopilot” software.
Yes, you read that right, the safety recall is because the very product feature that was supposed to make these cars safer is actually making them more dangerous.
An even deeper read of this story is that Tesla is pushing software updates to cars using an allegedly insecure supply chain. Given that the bug appeared in the first place, what is to prevent an even worse bug from being deployed to cars on the road at any time and in any place?
While some obviously want to celebrate the ability to remotely deploy update code, it may be wishful thinking to believe the update will not make things worse (Tesla’s 2.0 “autopilot” was infamously worse at safety than its 1.0 release).
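The supply-chain control that remote updates are supposed to rest on can be sketched simply. This is an illustrative stand-in (HMAC instead of real asymmetric code signing, and invented names; nothing here reflects Tesla’s actual design): a vehicle applies only firmware whose signature verifies against a pinned key.

```python
# Hypothetical sketch of update verification in a software supply chain.
# Uses HMAC-SHA256 as a simple stand-in for real code signing.
import hmac
import hashlib

PINNED_KEY = b"vehicle-provisioned-signing-key"  # assumption for the sketch

def sign_update(firmware: bytes, key: bytes = PINNED_KEY) -> str:
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def apply_update(firmware: bytes, signature: str) -> str:
    # Constant-time comparison to avoid leaking signature bytes.
    if not hmac.compare_digest(sign_update(firmware), signature):
        return "rejected: bad signature"
    return "applied"

good = b"v2021.x braking fix"
print(apply_update(good, sign_update(good)))                 # applied
print(apply_update(b"tampered payload", sign_update(good)))  # rejected: bad signature
```

Note what the check does and does not prove: a valid signature shows the update came from the keyholder unmodified, not that the code is any good. A fully verified pipeline can still push a worse bug, which is the blog’s point.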
Indeed, the “Plaid” model in flames above uses a “new version” of the S/X battery, which is obviously unsafe.
Tesla seems to make a habit of deploying bad code (its official insurance rating now carries a “P” for poor safety in Tesla engineering) and of pushing the cost of its own failures onto others.
Source: IIHS Ratings
Also worth mentioning is that Tesla’s PR system has been promoting acceleration as its top feature at the very same time that acceleration issues (coupled with handling and braking issues) are being cited in recent deaths of its customers.
This reads to me like Ford promoting the heating capabilities of its Pinto while its customers are dying in gasoline fires from preventable design defects.
Killed in a Ford Pinto: 23 confirmed (estimates run much higher)
Should a company be responsible for integrity failures in its supply-chain?
That’s the question that comes to my mind when I read the latest news:
Seafood experts have suggested Subway may not be to blame if its tuna is in fact not tuna. “I don’t think a sandwich place would intentionally mislabel,” Dave Rudie, president of Catalina Offshore Products, told the Times. “They’re buying a can of tuna that says ‘tuna’. If there’s any fraud in this case, it happened at the cannery.”
Pinning the case on whether the vendor “says tuna” on a label is odd, given that the vast majority of such claims have been proven fraudulent for a decade now.
…59% of tuna is not only mislabeled but is almost entirely comprised of a fish once banned by the FDA. Sushi restaurants were the worst offenders by far [75%].
In other words, is it still a form of fraud to sell a product without knowing or validating the integrity of its source, especially when sources are known to have very low integrity?
a blog about the poetry of information security, since 1995