Ivermectin research is plagued with data integrity failures, raising an important question for security and privacy professionals: what better data control options are available?
The latest news seems right on track to demand interoperability from technology that facilitates more individually controlled patient data stores:
…calling for scientists to adopt a new standard for meta-analyses, where individual patient data, not just a summary of that data, is provided by scientists who conducted the original trials and subsequently collected for analysis.
In other words, using the Solid protocol would enable patients to participate in a consensual study by opening access to their own data stores for research, while still allowing the highest possible integrity.
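The consent model Solid describes can be sketched as a toy access-control check. This is an illustrative model only, not the actual Solid API; the `Pod`, `Grant`, and WebID names here are hypothetical stand-ins for the real protocol's pods, WebIDs, and access grants.

```typescript
// Toy model of Solid-style consent: each patient's data lives in their
// own "pod", and a researcher sees only what the patient explicitly grants.
// All type and method names are illustrative, not the real Solid API.

type AccessMode = "read" | "write";

interface Grant {
  agentWebId: string; // who is granted access (e.g. a researcher's WebID)
  resource: string;   // which resource in the pod
  modes: AccessMode[];
}

class Pod {
  private data = new Map<string, string>();
  private grants: Grant[] = [];

  constructor(public ownerWebId: string) {}

  store(resource: string, contents: string): void {
    this.data.set(resource, contents);
  }

  // The owner consents to share a resource with a specific agent.
  grantAccess(agentWebId: string, resource: string, modes: AccessMode[]): void {
    this.grants.push({ agentWebId, resource, modes });
  }

  // Consent can be withdrawn at any time.
  revokeAccess(agentWebId: string, resource: string): void {
    this.grants = this.grants.filter(
      g => !(g.agentWebId === agentWebId && g.resource === resource)
    );
  }

  // A read succeeds only for the owner, or with an explicit unrevoked grant.
  read(agentWebId: string, resource: string): string | null {
    const allowed =
      agentWebId === this.ownerWebId ||
      this.grants.some(
        g =>
          g.agentWebId === agentWebId &&
          g.resource === resource &&
          g.modes.includes("read")
      );
    return allowed ? this.data.get(resource) ?? null : null;
  }
}

// Usage: a patient shares individual trial data with one researcher.
const pod = new Pod("https://patient.example/profile#me");
pod.store("/trial/results", "individual patient data");
pod.grantAccess("https://researcher.example/#me", "/trial/results", ["read"]);
```

The point of the sketch is that access is granted per patient and per resource, and revocable, rather than scraped into a central silo.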
“Accuracy is still bad” is the defining security story of the 2010s, and now the 2020s as well… seriously holding back the usefulness of technology by undermining knowledge.
Integrity controls are starved of innovation and need a completely new approach; they lag far behind where we are in confidentiality and availability control engineering.
This paragraph caught my attention, as I’ve been trying to shift the discussion from surveillance to debt capitalism.
What does this algorithm-industrial complex look like and who is involved?
Perhaps our first glimpse of the catastrophic impact of algorithms on civil society was robodebt, deemed unlawful by the Federal Court in a blistering assessment describing it as a “massive failure in public administration” of Australia’s social security scheme.
A rather annoying conversation I had recently with a farmer in a southern U.S. state went something like this:
Him – I’m telling you that UFOs are real
Me – in what sense is a UFO real?
Him – scientists admit flying objects can’t be identified
Me – you believe something observed may be open to interpretation? like a point of light could be Mars until we realize it’s Venus?
Him – right, therefore aliens are real
Hopefully you see the problem. There was no convincing him that aliens aren’t real: because some doubt could exist at some level, he concluded everything could be doubted at every level, such that anything imaginable was as good as real.
Think of his position like this. If a car approaching you at night is missing a headlight, you might wonder whether it’s a car or a motorcycle. Yet this guy, seeing a single light, believes he is about to be the first to prove the existence of a one-eyed space monster. Where is the probability weighting?
Philosophers dispensed with such nonsense in the 1700s with empiricism, and by the 1900s had established logic and reasoning to guide a rational approach to the unknown. Popper’s work on falsification is particularly important here: a claim counts as knowledge only if it could in principle be proven wrong.
Unfortunately, the allure of mysticism is strong, especially during uncertain times such as domain shifts in technology that force people to deal with lots of unknowns (technology destroying some of their assumptions, like suddenly losing old routines of working from an office and commuting by car everywhere).
It must be stressed that these women were not ignorant or ill-educated; nor were they socially or geographically isolated. They were dignified, sensible, experienced women, living in a middle-class suburb in a large city. Neither were they in any way eccentric;
on the contrary, they were pillars of their church and local community, essentially “respectable” in even the narrowest sense of that unpleasant term. Figures such as these do not at all give the impression that belief in supernatural cause and effect is declining.
It would seem that the world view of quite a substantial proportion of the population is probably decidedly less materialistic than scientists and historians imagine.
I’ll go one further. The celebrated Winchester House in Silicon Valley wasn’t about lack of education, and especially wasn’t about eccentricity, despite all appearances to the contrary.
Winchester was a foreshadowing of power and cognitive blindness, money poured into fraud, much in the same way that Silicon Valley today sees its “singularity” and “metaverse”. People are building a modern software version of Winchester’s infamous hardware “stairway into the ceiling” and “doors that open to a giant drop”.
I’ve written before about this and presented it many times in terms of advance-fee fraud. The more I study the problem, the more widespread I find it as a function of human susceptibility to social engineering.
Facebook announced very publicly it was deleting its trove of facial recognition data. Somehow this has been falsely reported as Facebook won’t use facial recognition.
Let me be very clear here: Facebook said it will continue using facial recognition.
The reports bury this fact so far down it’s highly suspicious. Why would all the headlines say Facebook has stopped using facial recognition while in fact carrying a buried lede like this one:
…the company signaled facial recognition technology may be used in its products in the future.
Future? That’s today. Facebook is literally saying they will continue to use facial recognition. Please everyone stop reporting this as an end to their use!
NO NO NO and NO
Even worse, Facebook tried to use specious safety reasons to argue that facial recognition has a notable upside.
Meta’s vice-president of artificial intelligence, Jerome Pesenti, said the technology had helped visually impaired and blind users.
Capturing faces was to help visually impaired and blind Facebook users? Come on. This is like someone saying at least fascism kept the trains running on time (it didn’t). What a way to throw its blind users under the bus. You think facial recognition is bad? Well now Facebook is telling you you’re a bad person because you must hate blind people.
Let me be very clear here: Facebook is covering something up that is very bad.
Deleting all those Yann LeCun-developed templates and databases built from sub-second facial scans has to be related to the fact that regulators are coming, and that internal documents are leaking.
Privacy watchdogs in Britain and Australia have opened a joint investigation into facial recognition company Clearview AI…
This is the actual headline we should be seeing for Facebook, not a bunch of puffery about it being the good guy for deleting data.
…following an investigation, Australia privacy regulator, the Office of the Australian Information Commissioner (OAIC), has found that the company breached citizens’ privacy. “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” said OAIC privacy commissioner Angelene Falk in a press statement. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime…”
Facebook likely knows of a serious breach and ongoing misuse of its facial recognition data such that it’s covering up here (not unlike the massive coverup operation by Yahoo when it was breached).
Why else would their PR department so odiously twist the news into “we’re shutting down facial recognition” while in the same breath saying “we’re still using facial recognition”?
Shame on journalists for reporting without doing a common sense check of the content.
Something very bad must have happened (after all, we’re talking about Facebook), and management seems to be pushing a very hot global coverup operation by manipulating the news cycles to get ahead of it.
Here’s a fascinating development in supply-chain security. Toyota is trying to prevent their vehicles from ending up being used by extremists (a very old problem) by creating an explicit agreement with their buyers.
With over 22,000 pre-orders for the new 300 Series, Japanese publication Creative311 reports that Toyota Japan now requires that an agreement be signed, confirming that any new Land Cruiser purchased will not be resold.
The other argument made is not about militants; allegedly Toyota is also trying to prevent artificial scarcity, where speculative buyers soak up supply to spike prices.
Although, to be fair, financial extremism is still a form of extremism. Both forms could be seen as criminal, or even as terror.