Cooperation Instead of Competition: How to Win Peace Through Wars

I love a new War on the Rocks article about “grassroots” engagement because its heart is in exactly the right place, yet much of the history and analysis seems off-base.

The following sentence is a giant clue to what this topic is really about:

…fear of losing such expensive equipment induced risk aversion among decision-makers and prevented them from being released…

It reminded me very much of how ill-prepared the US was, marching Civil War-style into the Spanish-American War, and what saved the day. Few Americans remember, but on July 2nd, 1898, the 24th and 25th Colored Infantry rescued the Rough Riders at San Juan Hill.

‘If it hadn’t been for the black cavalry, the Rough Riders would have been exterminated.’ Five black soldiers of the 10th Cavalry received the Medal of Honor and 25 other black soldiers were awarded the Certificate of Merit.

I’ve written about this before, also in terms of WWI, where an innovative Beersheba battle victory, attributed to British deception operations and a charge of their black cavalry, had a decisive effect on the overall war.

And on that note, it KILLS me to read something like this in the War on the Rocks article:

The ‘do-it-yourself’ ethos has evolved from hobbyist clubs that were dedicated to building personal computers back in the 1970s…

No. Go back much, much earlier.

It was self-sufficiency and becoming a “made man” (e.g., General Grant was a genius at hard-working innovation) that drove Union forces to defeat the rigid-thinking Southern Confederacy of slaveholders in the Civil War.

American innovation is greatly hampered by an inability to leverage diverse thinking that is readily available. When talking about “risk aversion” we need to be honest and describe it in terms more illustrative of the problem, such as racism or sexism.

It is also hampered by a failure to teach history, which illustrates how the best innovations have come from integrating, in other words learning to compete together instead of against each other. Victory is achievable by those who collaborate better.

Have You Seen the Revolution in Geospatial Intelligence?

Companies operating constant satellite surveillance systems seem to be struggling to find a market for their imagery.

BlackSky can take images of the same area several times a day… Iceye can take images regardless of weather and lighting conditions… with analytics for capabilities like change detection…
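
For readers wondering what an analytic like “change detection” actually does, here is a minimal sketch of the idea (my own illustration, not any vendor’s pipeline): compare two co-registered images of the same area taken at different times and flag the pixels that shifted beyond a threshold.

```python
import numpy as np

# Hypothetical illustration of basic change detection between two
# co-registered grayscale images of the same area.
def detect_change(before: np.ndarray, after: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels whose normalized brightness difference exceeds the threshold."""
    before = before.astype(float) / 255.0
    after = after.astype(float) / 255.0
    return np.abs(after - before) > threshold

# Simulated 100x100 frames of the same area taken hours apart
before = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
after = before.copy()
after[40:60, 40:60] = 255  # e.g., new vehicles or construction appearing
mask = detect_change(before, after)
print(f"{mask.sum()} pixels flagged as changed")
```

Real products layer registration, atmospheric correction and machine learning on top of this, but the core value proposition is that simple: tell me what is different since the last pass.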

In the old days it cost millions to acquire reliable imagery. It was difficult and secretive, which obviously are not ingredients for commercial success.

The intelligence market tended to be driven by life-saving high-stakes operations that nobody could talk about. In one of my big data talks, for example, I describe how fax machines in an African jungle sending geographical details became essential to a daring hostage rescue.

It’s kind of like the healthcare market, where intelligence was driven by life-saving surgery. There is now a transition to sensors everywhere, all the time, which has become a market exercise in privacy worries and even national security risks.

Now the companies with geospatial intelligence technology want to realize some kind of commercialization and profit, but it’s not yet clear (pun not intended) what kind of mass market would ever want spy tech. Climate activism? Disaster response? Surveillance, especially as a form of power transfer and political action, is usually something a very small group obsesses about.

Another way of putting this is that emergent militant extremists benefit most from the commercialization of technology, as they rapidly adopt it into asymmetric conflict.

The Islamic State fighters, through their purchase of commercial drones, however, at times had better reconnaissance capabilities than the Armed Forces of the Philippines, a key U.S. ally. In the words of one Filipino army ranger, “The Islamic State militants are better armed, with high-powered weapons, night vision goggles, the latest sniper scopes and surveillance drones.” The tactical drones necessary to provide similar awareness to Philippines troops existed but were sometimes underutilized. Fully employing these American-provided drones, such as the RQ-20 Puma, would have delivered better tactical reconnaissance for the Filipino forces, but their cost and scarcity ensured that the control of these systems was often retained at higher command levels. Further, the fear of losing such expensive equipment induced risk aversion among decision-makers and prevented them from being released for some missions, resulting in operational units often being disadvantaged against their Islamic State opponents. A cheap, capable drone — designed to basic military specification and made widely available to tactical units — would have made the battle for Marawi much easier for Philippines security forces.

Tesla v10 FSD Update is Yet Another Safety Disaster

I’ve seen a stream of complaints about the latest release of Tesla’s “Full Self Driving” software (FSD), version 10.

Here’s a perfect example where the car starts driving on the wrong side of the road, directly towards a collision with an oncoming car, and the driver says “WHERE ARE YOU GOING” as he strains to pull the Tesla away.

DRIVING ON THE WRONG SIDE OF THE ROAD INTO ONCOMING TRAFFIC. Source: YouTube

That’s without question a head-on collision attempted on a public road by the latest version of “Full Self Driving”.

Here’s one getting a lot of views on YouTube where the car surges suddenly towards pedestrians as if to kill them, exactly as I’ve predicted in my security conference presentations since at least 2016.

In another example from a different driver, the car throws a “take over immediately” alarm with almost no time to react when a pedestrian crosses in a crosswalk.

ALMOST HITTING A PEDESTRIAN IN A CROSSWALK. Source: YouTube

You can see the pedestrian look at the Tesla coming and start running to avoid being hit by it.

This driver is so saccharine about safety failures that he brags that very nearly hitting a pedestrian is just a nice data point for Tesla to learn from.

Likewise, he stops in a bike lane instead of the proper car turn lane and says it’s a big improvement over earlier versions, where the Tesla (correctly) did not enter the protected bike lane.

Note first that he starts to approach the left turn lane all the way on the left, next to the green protected bike lane.

Source: YouTube

This is a very specific marking, meant to let bikes avoid the danger of vehicles turning left in front of them.

The Tesla then illegally stops in the green bike lane instead of the dedicated left turn lane, creating a safety hazard.

STOPPING INSIDE PROTECTED GREEN BIKE LANE. Source: Tesla

That is NOT an improvement. And it comes just after the Tesla tried to accelerate and rush in front of a car to the right, before suddenly turning left instead and then wandering across lanes.

The car is failing the most basic tests.

In another video a driver in San Jose tries to cross railroad tracks and says “whoa, ok, 9.2 had that nailed down, not sure why 10 can’t do that”.

He is saying he takes the same turn repeatedly as a test and 10 is much worse… but it gets even worse as his car tries to drive directly into “road closed” and “do not enter” signs.

IGNORES GIANT SAFETY WARNING SIGNS IN PATH. Source: YouTube

Ignoring giant safety warnings is a long-time problem but it clearly gets worse in version 10 as you can see in this retrospective:

If that doesn’t alarm you, here’s an example from another driver who puts her Tesla behind a car on a narrow single-lane road and has to repeatedly stop it from behaving dangerously.

The video complains dryly that v10 is “very confused” and “freaked out” and “unsure”… all words that are exactly the opposite of the “confidence” marketing from Tesla.

TRYING TO DRIVE AROUND CARS ON SINGLE LANE ONE-WAY STREET. Source: YouTube

That driver says there were too many errors to list them all, but perhaps the most telling is when she grabs the wheel to stop a head-on collision and says “NOT GOING TO LET MY CAR TURN INTO TRAFFIC”.

Likewise, and finally, here’s a whole series of problems in one video: running red lights, trying to pull out in front of oncoming cars, turning right on a left turn, trying to drive around cars waiting at a red light, swerving off the road… and, just like the example at the start of this blog, DRIVING ON THE WRONG SIDE OF THE ROAD INTO ONCOMING TRAFFIC.

DRIVING ON THE WRONG SIDE OF THE ROAD INTO ONCOMING TRAFFIC. Source: YouTube

“THE SPEED LIMIT IS NOT 30” this driver also yells excitedly at his car in a slow zone as he tries to regain control to prevent an accident.

Watch the whole video here:

This all comes after the CEO of Tesla said v10 was being delayed a week for safety reasons.

Tesla was preparing to release the FSD Beta V10 on September 3, but today announced that it was slightly delayed. On September 2, the CEO of the company Elon Musk tweeted that the release of the new version of the software will be on Friday, September 10. He explained that the first thing the team needs to do is make sure the update works well and is safe for testers to use.

So is it safe as promised? Should these random “testers” be held liable when they cause accidents, or is it the fault of Tesla for encouraging dangerous behavior?

Let’s now go back to when Elon Musk in April 2019 (allegedly to juice investors for more money) gave a very public prediction that by 2020 he would deliver a national fleet of taxis with no human driver at all, such that even human controls would be eliminated by 2021.

Robo-taxi customers will be able to summon participating cars “from [their] parking lots” using a bespoke mobile app, Musk said during a presentation to investors this afternoon, and “get in and go for a drive.” He expects the network will have as many as a million cars in the next year and a half. Musk predicts that two years from now, Tesla will produce cars without steering wheels or pedals…

Well, what can we say now in 2021?

Tesla is a scam.

In many of the above cases the driver falsely rationalizes putting themselves and others in harm’s way because they believe the car is “learning”. They have no proof of learning, and in many videos you see the driver saying the exact opposite, yet ignoring their own observations because they believe in a lie.

Tesla is complicit in encouraging unsafe driving on a false premise, turning their customers into guinea pigs with no actual proof of learning ever going back to the believers.

In fact, Tesla recently openly admitted they aren’t learning and aren’t listening to their customers.

…we haven’t done too much continuous learning. We train the system once, fine tune it a few times and that sort of goes into the car. We need something stable that we can evaluate extensively and then we think that that is good and that goes into cars. So we don’t do too much learning on the spot or continuous learning…

Let me say that again.

Tesla has stated openly in their most recent presentation that they aren’t learning “on the spot or continuous” because that would be hard. Yet all these drivers put themselves and others in harm’s way on the very belief that Tesla is learning on the spot and continuously.
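
To make that distinction concrete, here is a minimal, hypothetical sketch (my own illustration, not Tesla’s actual pipeline) of the difference between training a model once offline and truly continuous, on-the-spot learning. All names and numbers are illustrative only.

```python
from typing import List, Tuple

def train_offline(dataset: List[Tuple[float, float]]) -> float:
    """One-time batch fit of a single weight w for y ~ w * x (least squares)."""
    num = sum(x * y for x, y in dataset)
    den = sum(x * x for x, y in dataset)
    return num / den if den else 0.0

def update_online(w: float, x: float, y: float, lr: float = 0.01) -> float:
    """Continuous learning: adjust the weight after every new observation."""
    error = (w * x) - y
    return w - lr * error * x

# Offline: the model is frozen after training and shipped to the fleet.
w_shipped = train_offline([(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)])

# Online (what drivers assume is happening): every new mistake updates the model.
w_live = w_shipped
for x, y in [(4.0, 8.3), (5.0, 9.8)]:
    w_live = update_online(w_live, x, y)

print(f"shipped weight: {w_shipped:.3f}, weight after on-the-spot updates: {w_live:.3f}")
```

The point of the sketch is only that these are two very different engineering commitments: Tesla describes doing the first, while drivers behave as if the second were happening every time they intervene.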

Tesla is a scam.

The scam isn’t limited to software, either; consider this repair story:

Tesla wanted to replace the entire battery for a total cost of $22,500. The Kelly Blue Book value of the used Tesla was about $23,000. After some research, Hoover was able to get the Tesla repaired by an independent shop for about $5,000, or 75 percent cheaper than what Tesla offered.

Maybe this is a good time to revisit how in 1957 we were all promised driverless cars (to arrive by 1975).

PASSING the above sign as you enter the superhighway, you reach over to your dashboard and push the button marked “Electronic Drive.” Selecting your lane, you settle back to enjoy the ride as your car adjusts itself to the prescribed speed. You may prefer to read or carry on a conversation with your passengers or even to catch up on your office work. It makes no difference for the next several hundred miles as far as the driving is concerned.
Fantastic? Not at all. The first long step toward this automatic highway of the future was successfully illustrated by RCA and the State of Nebraska on October 10, 1957, on a 400-foot strip of public highway on the outskirts of Lincoln.

At this rate, driverless will need another 500 years (while electric cars are an entirely different story). Perhaps the best way of describing the path being taken is this: instead of manual or automatic, Tesla now builds an “autocratic” car (the imposition of one’s will on others in an insistent or arrogant manner).

RayBan “Incel” Edition: The Latest Case for Facebook Ban

Facebook is a very invasive platform that fails at transparency. Now they’re promoting a product that’s designed to be invasive without transparency — spy glasses. What is the best defense against such nonsense?

The simple answer is a ban.

Bans on sunglasses were already a good idea to consider if you care about transparency and trust.

Unlike their American counterparts, UK soldiers removed their shades, enabling them to make eye contact with locals and build trust.

It’s really not that far-fetched to enact such a ban. The British police already banned sunglasses on duty to help protect and serve the public interest.

…unless medically prescribed, they must otherwise ‘be removed when contact is made with members of public’.

In the given military and police instances, there’s an underlying implication that glasses, as a form of technology, have the potential to disrupt power dynamics in social situations. When glasses are discreetly integrated with surveillance capabilities, it raises crucial questions about permission and consent.

Now, focusing on the Facebook example, it stands out as a scenario marked by extremely poor choices in both security and product management.

Perhaps one of the most disturbing angles to this story is how Facebook claims to have “engineered” their best “alert” for when a camera on someone’s face is in surveillance mode, yet it’s a tiny, indistinguishable pinpoint of light easily removed with a sticker or paint.

Can you even find it below?

Source: BBC

When the lens is more prominent than the thing designed to alert you that there is a lens, you know it’s totally backwards thinking.

They even made the logo far bigger than the tiny indicator of privacy risk.

The BBC also points out how Facebook arguably took a 2016 Snapchat design and made a far worse version, switching to a spy design that is less obvious about being a giant surveillance tool.

Source: BBC

What’s that you say? A different shape of the frame? Facebook has that covered too.

Source: Facebook

See how obviously worse their design was, while copying someone else’s? And they even copied the product name. This is how Snapchat announced their product in 2016.

You can also export Snaps and Stories, and share them outside of Snapchat on almost any platform you like! Note: At the moment, you can only export Stories that have 20 Snaps or less.

So Facebook called theirs “Stories” too. Either Snapchat is the silent OEM to this or there’s an egregious lack of morals and talent at Facebook, or both.

The story format, originated and made famous by Snapchat, has been on Facebook’s radar for some time, with the Menlo Park-based company first testing a Snapchat Stories clone within Messenger in September 2016.

The real and significant difference, therefore, is that Facebook took an idea for a fun/frivolous and obvious spy camera with bright distinctive coloring/marking to denote a camera and… redesigned it just to eliminate any possibility of consent from a victim.

Be honest now, can you tell whether “alert lights” are turned on here?

Source: RayBan

RayBan frames always had a bright contrast marker on the face, so whoever thought a bright contrast marker in the same spot would make sense as a reasonable “alert”… is either a design idiot or intentionally evil.

Source: Facebook. The CEO and founder got his start by collecting photos of women without consent and using those photos to intentionally harm women with online exposures inviting public ridicule and shame.

It’s like some Facebook executive said “hey guys, listen I used to dox women in college and abuse them for profits by uploading videos of them without consent so let’s make some ‘cool-guy incel’ glasses to enable domestic terrorism”… and then CEO Zuckerberg replied with “ME TOO, LET’S DO IT!”

Sadly, I’m not really exaggerating here. Another disturbing angle to this product is exactly this kind of discussion found on the “Incel” (misogynist domestic terror cell) wiki:

Giovanni became angry at Cho for not taking off his sunglasses… Cho filmed Giovanni, possibly to expose her contemptuous behavior towards Cho. However, a few female classmates complained that he was filming them. One female said that he was filming her legs. Cho was kicked out of class due to the complaints…

Stalking was legal in the USA until the 90’s. The first state to criminalize stalking in the United States was California in 1990 as a result of numerous high-profile stalking cases in California, including the 1982 attempted murder of actress Theresa Saldana, the 1988 massacre by Richard Farley, the 1989 murder of actress Rebecca Schaeffer, and five Orange County stalking murders, also in 1989.

Who was this Cho? Seung-Hui Cho was the 2007 mass murderer at Virginia Tech.

And so who wanted these glasses? Cho.

To put it bluntly, Facebook may as well have named this product after Cho as it fits his narrative exactly.

About 80 per cent of the victims in spy cam cases are female, while the overwhelming majority of perpetrators are male. In 2016, for instance, 98 per cent of perpetrators in spy cam cases were men.

Facebook has such a long history of facilitating rampant abuse of people, including mass atrocities, while repeatedly dismissing experts and consent, that I’m actually surprised they added even a useless pinprick of an “alert” light at all!

Credit goes to a lawyer, apparently:

Himel told BuzzFeed News that the LED light was a feature they ADDED after consulting with a handful of privacy groups… [Facebook funded lawyer] Greenberg said there is a real privacy risk to bystanders, and there hasn’t been a product exactly like this before…

How could Greenberg say that?! Come on, everyone knows this is a rehash of ideas decades old.

Here’s another comparison to what intentional spy glasses have looked like for MANY years.

Source: Spy glasses vendor

Spies and investigators who would buy such spy glasses are, in theory, well aware that recording someone without consent is an intentional breach of personal security (invoking professionalism, authorization and serious legal questions).

Joking around with authority roles should not be too easily confused with actual professional use:

“Papers please!”

Joking aside, let me be clear here.

Facebook has TERRIBLE user experience engineering. This is them pushing the future of privacy into the darkness of unethical power transfers, where they promote wealthy elites taking away others’ privacy as a form of status.

Here is how the Wall Street Journal reported it.

WSJ’s Joanna Stern tested them, and they looked so normal, very few people knew she was recording.

Not good.

The ENTIRE face of the glasses could illuminate to give people a very clear indication the cameras are on, yet they went in the exact opposite direction.

Source: Not Facebook

Their tiny pin-prick concept is a privacy disgrace, a consent disaster. The design is a complete failure of security and transparency fundamentals.

It’s anti-privacy, and it deserves a total and complete ban, which RayBan brought upon themselves by getting in bed with this “Incel”-oriented product for abuse of the public.

Maybe call it a RayBanBan to prevent an erosion of trust and safety by the unapologetic predators running Facebook, yet ultimately it’s Facebook that deserves the ban.

The bottom line is this:

As a nation, we will never be secure if we don’t value girls’ security.

Combined with this:

An undercover investigation by BBC News Arabic has found that domestic workers are being illegally bought and sold online in a booming black market. Some of the trade has been carried out on Facebook-owned Instagram, where posts have been promoted via algorithm-boosted hashtags… “This is the quintessential example of modern slavery,” said Ms Bhoola [UN special rapporteur on contemporary forms of slavery]. “Here we see a child being sold and traded like chattel, like a piece of property.”