My argument back then was that nobody should trust OpenAI (or Microsoft) when they promise data is safe in ChatGPT.
Nope. NOT safe.
…Microsoft reportedly suspended use of ChatGPT internally a few days ago.
Or more to the point, ChatGPT seemed to be a lying machine in a constant state of integrity breaches, not unlike Altman’s other dubious ventures.
A software company CEO allowing a constant state of data integrity breaches is no accident; that’s a management decision. It’s like a financial company that can’t balance its books, or a payment card processor with constant privacy breaches.
Apparently my assessment of trust was closer to the truth than even I realized, because OpenAI just fired its CEO, citing an inability to believe him.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Will they fire ChatGPT next?
But seriously, an interesting footnote is that the board also says it forced its chair to step down but didn’t fire him. That message hints at a conspiracy, but without sufficient evidence to hold the chair as accountable as Altman.