Expressing concern over the “highly unusual circumstance” of [Judge] Mershon “confronting her directly and privately,” Eckerle sided with the prosecution, which contended that Mershon “altered” the victim’s memory, “and by using judicial coercion and intimidation, that he overcame her, causing her to claim falsely that she had lied (at) trial.”
When that didn’t work, because the courts rejected the false claims and suspected coercion, the judge somehow convinced the unpopular governor, with only a few days left in office, to pardon the sex crimes. The judge continued to play a far-from-impartial role by personally celebrating with the accused.
Mershon picked up Hurt at the state prison in La Grange on Friday and drove him to his mother’s house, the retired judge told The Courier Journal on Monday.
A child abuse expert weighed in on the same story and correctly concluded this was all an obvious abuse of the justice system.
Pamela Darnall, president of Family & Children’s Place, a regional child advocacy center that evaluates and treats children for sexual abuse, said she was shocked by the circumstances of the pardon. In general, children do not lie about sexual abuse, she said. “The research continually bears out that the majority of kids are not making it up,” she said. Darnall said she was disturbed that Mershon later sought out the victim, which the prosecution argued caused her to change her story. “These are people in power. This is a judge,” Darnall said. “This is what kids deal with when people who are the adults … pressure these kids.” Nor could Darnall understand Bevin’s willingness to pardon Hurt. “A leader steps in and says I simply believe it wasn’t true so I’m going to pardon him,” she said. “What kind of message does that kind of behavior send to our kids and send to adults who have lived with their secrets for so many years?”
In related news at the end of the story, the governor says murder with a vehicle is not murder.
Jerry Thompson was killed in 2014 when his vehicle was struck by a car driven by Wibbels. The governor wrote that Wibbels “was involved in a tragic accident and has been incarcerated as a result of his conviction for wanton murder. This was not a murder.”
Wibbels allegedly used an emergency shoulder to pass traffic illegally when he struck Thompson’s vehicle head-on.
This kind of manipulation process should be familiar to some as part of the canon of social engineering, with many books already written on the subject.
It also is well documented through the tragic history of the developing world, cruelly manipulated during the Cold War to foment coups and drive power towards dictators who would serve some narrowly-defined agenda (instead of allowing representative democracy). Chad, Guatemala, Angola, Mozambique, Iran…the list I’ve written about on this site alone is long.
The White House’s unilateral and un-American move to pardon war criminals shows how power is being manipulated by foreign military intelligence campaigns leveraging bias, much as developing nations were manipulated over the past 70 years.
Rolling Stone explains succinctly how a modern system of malicious social media is being used:
Russia’s goals are to further widen existing divisions in the American public and decrease our faith and trust in institutions that help maintain a strong democracy. If we focus only on the past or future, we will not be prepared for the present. It’s not about election 2016 or 2020.
This is spot on. Militarized information campaigns push bias every day to build power slowly, in order to wield it at a moment’s notice, which Rolling Stone refers to as emotional drive:
She wasn’t selling her audience a candidate or a position — she was selling an emotion. Melanie was selling disgust. The Russians know that, in political warfare, disgust is a more powerful tool than anger. Anger drives people to the polls; disgust drives countries apart.
Pardoning war criminals thus does three things for the current White House by generating disgust:
Demonstrates bias towards a “supreme leader” who can do whatever he wants, whenever he wants, regardless of law. This generates disgust among those who believe in the rule of law, such as the Constitution. It also negates any commentary about war crimes being committed in Syria after American forces retreated. It’s a negation of both domestic and international moral codes.
Demonstrates bias towards the “Christian warrior” who can do whatever he wants, whenever he wants, regardless of law. This generates disgust among those who believe in the rule of law, such as the military code. Soldiers pay attention to example, and failing to hold bad examples accountable generates dissent in the ranks.
By establishing these two bright lines of disgust on social media and elsewhere, it slowly helps identify the extremists in America happy to obey a dictator. We see two national tests of loyalty based on emotion-driven bias. Those disgusted by such obvious violations of law are classified as disloyal to dictatorship and abruptly pushed out, in favor of servile minds willing to sign off on overtly destroying democratic concepts like the Constitution.
To make a finer point on this: some American military leaders are convinced that mutually assured destruction (MAD) kept the world free of war, while others realized there have been many wars despite MAD, with untold suffering, and that the UN has primarily served to prevent escalations. This used to be a minor point of division worth debating.
By fueling bias, military agents have turned that division into a massive fissure where people are disgusted by the opposing side; either rule of law is respected (e.g. a UN Convention on the Rights of the Child is signed easily) or laws get ignored because might is said to make right (e.g. abuse of children gets called an inherent right of parenting).
Already we see people defending the White House by saying their dictator can do no wrong, as they consider the current occupant “strong” and therefore above all laws. They’ll follow his orders to abuse anyone, even the most vulnerable populations unable to defend themselves.
To be fair the supporters of the current White House don’t necessarily like its occupant as much as the theory of “strong man” power that political scientists used to refer to as fascism. The support is driven by disgust with representative democracy, which means there is desire for dictatorship where a small cabal of power can even dispose of the current bumbling occupant and his family.
There even could be simmering intent to soon install a new and more competent/healthy dictator via secret police (typical role for those who commit war crimes) in order to better achieve some narrowly-defined self-serving agenda (e.g. national socialism, where a very small group gets defined as being an elite nation to absorb all benefits away from much larger state populations).
Here’s how Mussolini himself described it in a text he was credited with in 1932 (“La dottrina del fascismo”, an essay written by Giovanni Gentile):
Fascism attacks socialism first, then tries to destroy all of democracy. Read in reverse, Mussolini here is giving away the antidote: he apparently believed that if people supported genuine socialist candidates and causes, they were holding back the slide into dictatorship.
At that time the bias technique was against “bolshevism” and for “pacifism”. Hearst (and the Koch family) were far less successful than today’s Zuckerberg, however, and the pro-fascism leader in America failed to get elected President that year.
Here you can see why the 1932 Presidential election was so critical to the rejection of fascism: a rejection of the “strong man” propaganda spreading at that time.
An Allied victory in 1945 clearly cut the fascist lines short. American WWII soldiers destroyed the Axis forces, which had defeated socialism and trained their guns on all of democracy. This victory restored faith in laws and institutions (e.g. the establishment of the UN) and meant the US was even able to export concepts of teamwork and respectful collaboration (lean out) onto occupied fascist countries. In that sense, Germany and Japan have become something of a time capsule for the US values that made the country so successful.
Here’s how seasoned leaders have described the current White House attacking military values and authority, which appears to most as a madman throwing away America’s democratic legacy to replace it with the disgusting ideas of fascism:
To put the US back on track and reverse the White House, these pardons for war crimes would need to stop disgusting and dividing the nation. That seems unlikely, given that fascist tactics are designed to disgust anyone who really believes in the rule of law, let alone anyone who gave an oath to uphold the Constitution. A bullying push towards divisiveness and away from law, as a disgusting test of loyalty, is exactly why the White House pardoned the accused.
See also: “2016 Republican Candidate: Fascist Week 2016”
As a little child I once got a ride to school from a neighbor who had a Subaru 4×4 that could go where school buses were failing (another time our bus was rescued from a ditch by a Korean-war 6×6 but that’s a story for another day).
Her tiny white car slowly crawled in low range over big prairie snow drifts and up the icy dirt hills. She softly patted the dash with her heavily bundled hand and yelled “COME ON BESSIE” above the roar of a little EA82 boxer engine that could.
It has been so many years, I wonder: did she put her Bessie down, and was it cruel when she did? That’s the kind of question being asked by MIT in a new article on whether pressing an “off” button is equivalent to machine murder. Maybe that’s the wrong question entirely, since machines can be turned on again? Are you a god if you can switch a robot on?
Here’s a particularly funny part where a “roboticist” notices that humans in high-risk/controlled environments like to name things and minimize changes.
Julie Carpenter, a roboticist in San Francisco has written about bomb disposal soldiers who form strong attachments to their robots, naming them and even sleeping curled up next to them in their Humvees. “I know soldiers have written to military robot manufacturers requesting they fix and return the same robot because it’s part of their team,” she says.
Should we accept this as some kind of exception as opposed to a norm? Who doesn’t name things or keep them close, even ones we don’t mind turning off?
Here’s a thought: sleeping with a machine preserves integrity and reduces the cost of trust. Returning the same one helps maintain integrity too, as every machine tends to have its particulars.
I’d challenge this roboticist to put such behavior in historic context of soldiers and their machines for the past 100 years. And despite my “Bessie” experience, I’d say we trend more towards machines as extensions of our bodies, and not really companion-like.
In fact the old Japanese theory suggests we are less likely to anthropomorphize robots that appear the most human-like. We might be most comfortable turning them off due to what they called the “uncanny valley”.
Attachment seems to come more from extension of our functional needs, which makes sense especially for bomb disposal risks, and helps explain the reasoning behind shooting victorious horses after battle has ended.
Of all the times I held my named laptop (because of course it has a name) in my arms, even sleeping next to it, nobody ever wrote about this as some kind of attachment. And I’d say they probably didn’t need to.
In fact I’d guess the percentage of security pros who keep their systems close and avoid rotations is near 100% but why call that a study subject?
There’s a part of a new decision that I keep re-reading, just to make sure I read it right:
As a passcode is necessarily memorized, one cannot reveal a passcode without revealing the contents of one’s mind.
I mean that’s just not true. The old joke about people putting sticky-notes with passcodes on their monitor is because sometimes they are too hard to memorize. The reason NIST backed off complexity requirements and rotations is because passcodes turned out to be too hard to memorize and people were storing them unsafely.
We all recommend password managers and using unique passwords for every site, which is all too hard to memorize. The entire password market doesn’t believe passwords are necessarily memorized.
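As a small illustration (Python standard library only; `generate_passcode` is my own made-up name, not from any case record), here is the kind of passcode a manager creates, plainly never meant to live in anyone’s mind:

```python
import secrets
import string

def generate_passcode(length: int = 64) -> str:
    """Generate the kind of random passcode a password manager stores,
    which nobody is expected to memorize."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets (not random) is the stdlib module intended for security use.
    return "".join(secrets.choice(alphabet) for _ in range(length))

passcode = generate_passcode()
print(len(passcode))  # 64
```

A 64-character string like this exists only on disk or in a manager’s vault, which is exactly why “necessarily memorized” rings false.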
And then there’s the simple fact that passcode sharing often uses communication channels that rely on storage other than the human mind.
Also, beyond being wrong, that sentence seems unnecessary to the decision. If this case didn’t have a password written down, despite the accused saying he used one 64 characters long, then it is an exception. The fact remains that passcodes very often are stored outside the human mind.
The rest of the decision is not terribly surprising:
…the compelled production of the computer’s password demands the recall of the contents of Appellant’s mind, and the act of production carries with it the implied factual assertions that will be used to incriminate him. Thus, we hold that compelling Appellant to reveal a password to a computer is testimonial in nature.
…military historian Len Fullenkamp reflects on the importance of immersing oneself in the minds of strategic leaders facing dynamic and complex situations. One tool is the staff ride, an opportunity to walk a battlefield and understand the strategic perspective of the leaders…
I’ve walked countless battlefields and tried to relive the decisions of the time. One of the most unforgettable was a trench line perfectly preserved even to this day on a ridge that held off waves of attacks for several sleepless days.
On another long-gone battle ground I stumbled upon three live bullets that had been abandoned for decades, slowly rusting into the ground atop a lookout. I held them in my hand and stared across the dusty exposed road below for what seemed like hours.
Yet I rarely if ever have seen a similar opportunity in the field of security I practice most today. Has anyone developed a “staff ride” for some of the most notorious disasters in security leadership such as Equifax, Target, Facebook…? That seems useful.
In this podcast the speaker covers the disastrous Pickett’s Charge by pro-slaveholder forces in America. After a two-day investment the bumbling General Lee miscalculated and ordered thousands of men to their death, in what he afterwards described plainly as “had I known…I would have tried something different”.
Fullenkamp then goes from this into a long exploration of risk management until he describes leadership training on how to make good decisions under pressure:
What is hard is making decisions in the absence of facts.
Who could be the Fullenkamp of information security, taking corporate groups to our battlefields for leadership training?
Also I have to point out Fullenkamp repeats some false history, as he strangely pulls in a tangent about how General Grant felt about alcohol. Such false claims about Grant have been widely discredited, yet it sounds like Fullenkamp is making poor decisions with an absence of facts.
Accusations of alcoholism were a smear and propaganda campaign, as historians today have been trying to explain. For example:
Grant never drank when it might imperil his army. […] Grant, in a letter to his wife, Julia, swore that at Shiloh, he was “sober as a deacon no matter what was said to the contrary.”
After Grant’s death, exaggerated stories about his drinking became ingrained in American culture.
First, the truth of the charges against Grant is related to America’s pre-Civil War political and military patronage system (corruption, basically) being unkind to him. He succeeded in spite of it, and he was living proof of someone using the past to better understand the present.
After extensive experience fighting in all major battles of the Mexican-American War he didn’t sit well being idle and under-utilized. He was introverted and critical of low performing peers. A superior officer in California used minor charges of alcohol as a means to exercise blunt authority over the brilliant Grant.
Second, it was KKK propaganda campaigns of prohibition that pushed the false idea that Grant’s dispute with his superior was some kind of wild and exaggerated issue relevant to prohibition.
In fact history tells us how pro-slavery generals literally became so drunk during battles that they disappeared and were useless, battle after battle. The KKK projected those real alcoholic events from pro-slavery leadership onto Grant to obscure their own failed history and try to destroy his name.
Apparently it worked because it’s 2019 and far past time for people to stop repeating shallow KKK propaganda about America’s greatest General and one of the greatest Presidents.
Your model must predict an output PNG image with height and width of 1024 pixels, where each pixel value corresponds to the predicted class at that place in the input image:
0: no building
1: undamaged building
2: building with minor damage
3: building with major damage
4: destroyed building
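The five classes above can be sketched as a simple mask-writing step (assuming numpy and Pillow are available; `save_mask` and the dummy prediction are my own illustrations, not part of the challenge code):

```python
import numpy as np
from PIL import Image

# Class indices follow the damage scale described above.
CLASSES = {
    0: "no building",
    1: "undamaged building",
    2: "building with minor damage",
    3: "building with major damage",
    4: "destroyed building",
}

def save_mask(predictions: np.ndarray, path: str) -> None:
    """Write a (1024, 1024) array of class indices 0-4 as an 8-bit PNG,
    one class value per pixel."""
    assert predictions.shape == (1024, 1024)
    assert predictions.min() >= 0 and predictions.max() <= 4
    # Mode "L" stores a single 8-bit value per pixel.
    Image.fromarray(predictions.astype(np.uint8), mode="L").save(path)

# Dummy example: mark one block as a destroyed building.
mask = np.zeros((1024, 1024), dtype=np.uint8)
mask[100:200, 100:200] = 4
save_mask(mask, "prediction.png")
```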
This makes for very interesting games played at the superficial level with paint or loose objects.
For example could computer vision distinguish easily the sort of building under perpetual construction (as often is the case in developing nations) from one that has been damaged? Or in a similar vein, if buildings are maintained poorly in markets that lack readily available materials, what constitutes minor damage?
Cloudflare traced the problem to a regional ISP in Pennsylvania that accidentally advertised to the rest of the internet that the best available routes to Cloudflare were through their small network. This caused a massive volume of global traffic to flow to the ISP, which overwhelmed their limited capacity and halted Cloudflare’s access to the rest of the internet. As Cloudflare remarked, it was the internet equivalent of routing an entire freeway through a neighbourhood street.
Funny thing, historically speaking, is that the Internet is based on transportation logistics. The first hackers were literally people who disobeyed road conventions and safety and went their own way for selfish reasons.
The answer for why a regional ISP in Pennsylvania wanted all the traffic to flow through their neighborhood is…still being discussed. Some speculate it was a test. Someone somewhere always is thinking about how to do Internet-scale attacks on confidentiality, integrity and/or availability.
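The mechanics behind the leak are just longest-prefix matching. A toy sketch using Python’s stdlib `ipaddress` module (the /20 prefix and the AS labels are illustrative assumptions, not the actual leaked routes) shows why a more-specific announcement wins and attracts the traffic:

```python
import ipaddress

# Toy routing table: prefix -> origin description. The "leaked" entry is a
# more-specific prefix announced by a small ISP; BGP routers prefer the
# longest matching prefix, so it wins over the legitimate announcement.
routes = {
    ipaddress.ip_network("104.16.0.0/12"): "Cloudflare (legitimate)",
    ipaddress.ip_network("104.16.0.0/20"): "small regional ISP (leaked)",
}

def best_route(addr: str) -> str:
    """Pick the longest-prefix match, as BGP route selection does."""
    dest = ipaddress.ip_address(addr)
    matches = [net for net in routes if dest in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("104.16.1.1"))   # falls in the leaked /20, so it wins
print(best_route("104.17.0.1"))   # outside the /20, legitimate route wins
```

The same preference for specificity that makes routing efficient is what lets a single mistaken (or malicious) announcement pull a freeway’s worth of traffic into a neighborhood street.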
While many Americans criticized China relentlessly for engineering a social-ranking system, MIT has proudly announced it’s working to improve prediction of car behavior with…a social-ranking system:
“We’ve developed a system that integrates tools from social psychology into the decision-making and control of autonomous vehicles,” Wilko Schwarting, a research assistant at MIT CSAIL, told Digital Trends. “It is able to estimate the behavior of drivers with respect to how selfish or selfless a particular driver appears to be. The system’s ability to estimate drivers’ so-called ‘Social Value Orientation’ allows it to better predict what human drivers will do and is therefore able to drive safer.”
What do you expect your social value to be in the classification system run by MIT? Here’s a historic reference to help:
Nowhere in the story do I see mention of the system being restricted to cars only, or prohibited from leaking into feudal-style governance. Thus, any car on the road could be running social-ranking on everyone and everything it sees all the time.
It will be interesting to see if any American news outlets pick this up as creepy social-ranking of populations, applying the same worry as they did with China.
Also it will be interesting to see how this expands into everything that can be seen at street level by cars.
Is your lawn mowed? Maybe you are being selfish by cutting that grass and denying wildlife a habitat. Or maybe you are being selfish by allowing a habitat to grow that naturally pollinates your neighbor’s lawn, who is trying hard to mow constantly and deny wildlife a habitat.
Who decides the rankings? Or more importantly, if we get transparency into the systems of rankings, will people build a level of trust by playing into expected behavior and then abuse that ill-gotten trust to switch behaviors and escape prediction?
By the end of the Edo period, Omi merchants, and their cousins in other areas of Japan, had grown immensely wealthy. They were also uniquely situated because they had far-flung interests which brought them into contact with the political and economic changes of the last decades of the period. They often proved quick to anticipate the changes and take advantage of them and became influential in the modern period of capitalist development.
It’s not only humans at risk of being ranked. Roads will be constantly assessed, judged and discussed by the things operating on them, according to a tire manufacturer:
…data on current road conditions and aquaplaning risks can be sent to other cars in the nearby area, also via 5G
Imagine sending out false aquaplaning risk data in your neighborhood to slow traffic, instead of having to pour an expensive speed bump (sleeping policeman) onto the road.
And things won’t just be judging what they see at eye level or at the surface, either. Last year a ground-penetrating radar company said cars soon could be judging what lies under a road surface as well.
…system will scan up to 10 feet below the surface of the road in order to lock on to stable ground. It can then use this data, combined with data from the vehicle’s other onboard sensors, to build itself a map of subterranean features which it can then use to maintain its position on the road…
Putting it all together, combining scores of things around a car and under a car means the car can take action with a better sense of what to predict. However, the integrity of that data is nowhere near proven reliable, and it may even open up whole new markets of manipulation for those trying to invert social pyramids.
A Harvard man walks into a wildlife protection demo and an AI system made by Intel labels him a poacher. His reaction is fascinating. He criticizes machines in a way that seems just as fitting for humans. Would he have reacted the same to a human labeling him in this manner? Even more interesting is the man labeled a poacher is from an institution (Harvard) that has been known to perpetuate injustices like poaching.
This incident raises the question of whether we should expect human intelligence to be criticized as often or as vocally as machine intelligence. I mean, is it right to expect more of machines than of humans in this scenario? I’d like to explore with this post whether humans of “Harvard intelligence” could be expected to pass the bar set by Harvard for a machine of “Artificial intelligence”.
In other words what if people who graduate from Harvard, who claim to be intelligent, exhibited the same or worse behavior as a machine labeling people poachers in the wildlife protection demo?
POACHER: A poacher is generally defined as someone who unfairly or dishonestly takes and uses something for themselves when it belongs to someone else.
HARVARD: Harvard is generally defined as a school with a tarnished legacy that today remains affiliated with white men in positions of power who display very questionable ethics (Pompeo, Kobach, Zuckerberg…). Here’s a perfect example:
Harvard University is profiting from one of the earliest known photographs of an enslaved man, despite requests by his descendants to stop doing so, the man’s great-great-great-granddaughter says in a lawsuit…
A little deeper inquiry into that lawsuit reveals that Harvard was heavily invested in perpetuating white supremacy doctrines even after the US Civil War forcefully decided blacks should no longer have their bodies taken unfairly or dishonestly for use by white men.
In 1865, just as emancipation was being secured in the United States, [Harvard professor] Agassiz had more than a hundred photos taken of nude African-descended Brazilians to build support for white supremacy and polygenesis. With slavery in the United States ended, Agassiz’s work became even more critical: In a moment when America’s future regarding race was highly malleable, building a scientific foundation to support continued white supremacy was even more of an imperative.
Harvard has been extremely slow to address its racist and unethical foundations, which supported unfair and dishonest practices, and it should concern everyone how many white supremacists even today hold Harvard degrees. Shouldn’t they fail tests of intelligence?
INTELLIGENCE: Intelligence is defined here with Gottfredson’s perspective that it relates to a broad and deep capability for comprehending surroundings, such as making sense or figuring out what to do. For example, what should Harvard do when asked to stop unfairly or dishonestly taking and using something for themselves?
Example of Harvard “intelligence”
Kris Kobach of Kansas (KKK) is a good example, as he received a BA in Government in 1988 with distinction as the top student in his department. We also should include Kobach’s adviser (trainer, if you will), the director of Harvard’s Center for International Affairs, Professor Samuel P. Huntington.
Huntington infamously taught Kobach nativist doctrine such as how to block non-white participation in government. One of the crazy theories was that people of Central and South America who enter the US pose an existential threat to the “American identity.”
Mexican intellectual Enrique Krauze described Huntington’s method as a “crude civilizational approach.” Carlos Fuentes called Huntington “profoundly racist and also profoundly ignorant” and accused him of adopting the favored fascist tactic of creating a generalized fear of “the other.” Henry Cisneros noted that Professor Huntington was “hand-wringing over the tainting of Anglo-Protestant bloodlines.” Andres Oppenheimer of Miami called Huntington’s work “pseudo-academic xenophobic rubbish” and called for national protests against Harvard University and publisher Simon & Schuster. Even those sympathetic to Huntington’s anxiety about Mexican immigration stood their distance. Alan Wolfe said that at times Huntington’s writing bordered on hysteria, and that he appeared to be endorsing white nativism. The editors of the British magazine The Economist questioned Huntington’s notion of Anglo Protestant culture, noting that it had been “a long time since the Mayflower.”
Kobach earned top honors in government theory in the late 1980s, and trained under this obviously racist and xenophobic adviser. Can you guess, based on world political events at that time, what came next?
In 1990 (given the fall of South Africa’s apartheid was still four years away) Kobach published a pro-apartheid book titled “Political Capital: The Motives, Tactics, and Goals of Politicized Businesses in South Africa” (University Press of America).
Kobach wrote about a white police state as good for business. He seemed to think beating down non-white populations (those seeking equal rights with white police) was how to push wealth into white hands just as a matter of “peace keeping”.
Technically speaking he wrote “strict Verwoerdian apartheid enforced with an iron fist can be seen as a route to a more stable South Africa”. You can see it on page 28 of his Harvard thesis:
After graduation and publication of his pro-apartheid screed, Kobach embarked on a life quest “to enact a nativist agenda, often from within the government.”
In other words, intelligence doesn’t seem like the right word to describe this top student from Harvard. He did the wrong things over and over. What if a machine made these same mistakes? He literally made a career out of falsely labeling humans and declaring them a threat, based on completely debunked white supremacist theories of species preservation (nativism).
Harvard criticism of Artificial “intelligence”
Fast forward to today’s debate on AI ethics and we have a Harvard man saying an “intelligent” system has unfairly labeled him a poacher, much to his astonishment.
Hey, did that system read history and know he was from Harvard, an institution known for its unauthorized appropriations? No.
Does looking at someone’s training environment, and probability of learning selfish supremacy doctrines, seem like a good way to find people who favor poaching? Maybe.
Those ideas are far more complicated as learning models than what actually happened. The label of poacher turns out to be very easily explained.
First, kudos certainly go to Latonero for speaking out from within the horribly tarnished halls of Harvard.
His article does seem a little overly “why me” and primarily concerned for his own welfare, yet it makes a fair point that he doesn’t understand the authority or perspective of the system labeling him.
Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red. […] I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?
Second, at no point does he say, for example, 35,000 poached elephants is a catastrophe worthy of solving. Is there a case to be made for labeling ever? Perhaps this is one place where simple labels make sense, as a piece of a puzzle that trends towards more sophisticated answers and broader actions.
Those deaths are approaching extinction level threats, and the elephants are in natural prisons where no human should be…hold that thought.
Latonero gets a good and clear answer to his question and just brushes it off as insufficient.
When I reached out to the head of Intel’s AI for Good program for comment, I was told that the “poacher” label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers.
There we go. Intel clearly says a simplistic algorithm is looking for humans within a space that is authorized only for animals. When a human enters the space they are labeled a poacher because they do not have authorization, and it is assumed they entered unfairly or dishonestly.
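The logic Intel described can be sketched in a few lines (this is my own hedged reconstruction of their stated behavior, not TrailGuard’s actual code; the names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                # e.g. "human", "vehicle", "elephant"
    in_protected_zone: bool  # inside the animals-only area?

def label(det: Detection) -> str:
    """Per Intel's description: flag only humans or vehicles inside the
    protected zone, and leave the 'poacher' judgment to park rangers.
    A demo screen that prints 'poacher' for any detected human collapses
    that distinction, which is exactly what Latonero ran into."""
    if det.kind in ("human", "vehicle") and det.in_protected_zone:
        return "unauthorized: alert rangers"
    return "no alert"

print(label(Detection("human", True)))      # flagged for ranger review
print(label(Detection("elephant", True)))   # animals never trigger an alert
```

Note the design choice: the system itself never assigns guilt; it only reports unauthorized presence, and the human rangers make the call.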
I can understand Latonero being shocked by the label. He probably wouldn’t have thought twice if the screen had said “unauthorized”, or even just “human”, instead of making the logical leap that unauthorized access means poacher.
Walking around at an “MIT conference on emerging AI technology”, he felt entitled to enter a space and approach the sensor. He did not appreciate being told his actions were a violation linked to extinction-level threats.
It sounds perhaps like what a Mexican immigrant to Texas (a state forcibly taken from Mexico) might feel when being labelled by Kobach as a violation and an extinction-level threat.
Using the Harvard critique of intelligent systems to assess Harvard graduates
Ok, now imagine Kobach is that AI system that Latonero walks up to. Let’s say Latonero is an American migrating into the US. Kobach would then label Latonero a threat and…nothing seems to happen. Am I right here?
I don’t see any Harvard ethics experts lining up to warn us of the “intelligent” people emerging from Harvard training who use simplistic and dangerous labels to harm society.
Again, I can give kudos to a Harvard expert calling attention to simplistic labeling and calling it less than intelligent, yet I have to point out his warnings would be far more appropriate to issue a take-down on Kobach and ban him from any authority or office.
Graduates of Harvard who perpetuate its awful past and poaching ways are far worse than the AI system that Latonero is warning about.
We should fix both humans and machines, and by comparison we have easy solutions ready for the latter. But the real question here is whether an AI system designed to protect humanity from poachers would be seen as accurate if it labeled Kobach an existential threat to society.
After all, a Harvard affiliation really could get classified as probable poacher.
And on that note the parallels are closer than you might realize:
…Kris Kobach is having a tough time finding support for a plan that would allow the [2012 Kansas] governor to distribute 12 big-game hunting permits at his discretion.
In other words Kobach literally tried to pass a law to bypass wildlife safety authorities, which would shield himself and his associates from being labeled poachers. He could hand out a sort of get-out-of-jail-free card, the sort of thing the KKK was famous for using during prohibition to limit alcohol to whites only.
Kobach’s failure to pass a self-entitlement bill led to this embrace in early 2016 with an infamous elephant killer:
Kansas Secretary of State Kris Kobach sports an orange hunting cap, a long gun and a wide grin as he stands alongside the president’s son and 20 dead pheasants.
…Trump announced that the lifting of the ban [on import of dead elephant] was on hold, pending further review. In a follow-up tweet, he went on to say he’d “be very hard pressed to change my mind that this horror show in any way helps conservation of Elephants or any other animal.”
Hopefully this post has helped explain that Harvard makes the best case yet that Harvard should be criticizing Harvard more.
Once outfitted with the technology, service leaders headed through the woods and into a building on a mission to test out the IVAS HUD’s ability to recognise and register faces, pull up maps, and translate foreign characters. Leaders also demoed the integration of the Family of Weapon Sights – Individual (FWS-I) – a thermal sensor mounted on the M4 Carbine and M249 Squad Automatic Weapon that provides users with infrared imagery in all weather and lighting conditions – into the IVAS HUD when they faced ‘enemy combatants’.
Imagine Kelly’s Heroes talking today about an AR hack that made hostile forces see teddy bears coming at them…it must be way past time for a remake of that classic movie.