“Any sufficiently advanced technology is indistinguishable from a rigged demo.” — James Klass
The spectacular failure of blood-testing firm Theranos is the subject of a riveting book, Bad Blood by investigative reporter John Carreyrou, and an engaging HBO documentary, “The Inventor.” Both focus on Elizabeth Holmes, the once-celebrated wunderkind who dropped out of Stanford at age 19 to “change the world” with a device that would perform hundreds of diagnostic tests with a few drops of blood from a finger stick. It’s a story made for Hollywood (Jennifer Lawrence will play Holmes in the forthcoming movie), filled with lies, deception, threats and sex, all set at a Silicon Valley startup.
Once valued at $9 billion, Theranos raised hundreds of millions from famous investors such as Rupert Murdoch, Betsy DeVos and the Walton family (owners of Walmart). It landed a corporate partnership with drugstore giant Walgreens, which built a series of “wellness centers” in its stores, where customers could order blood tests without a prescription. Due to a legal loophole, the Food and Drug Administration (FDA) hadn’t examined the Theranos device, called “Edison,” which was still just a prototype. But the show had to go on. Most blood tests had to be performed with a traditional syringe draw. As for the “droplet” tests, they were dangerously unreliable. The technology that made everyone so excited, it turned out, didn’t actually work. Theranos collapsed. Elizabeth Holmes now faces trial for criminal fraud.
Theranos’ initial success was not something that Holmes could have achieved on her own. She needed the cooperation of a supporting cast of prominent men (yes, they were all men) on her board, including such luminaries as former Secretaries of State Henry Kissinger and George Shultz, former Senator Sam Nunn and retired General James Mattis (who would go on to serve as Secretary of Defense in the Trump administration). None of them had backgrounds in medicine. Also serving on the board, and as the company’s lead lawyer, was David Boies, the trial lawyer who had represented Vice President Al Gore in his election case before the Supreme Court.
But the most important enabler of the Theranos con was not a human being. It was secrecy. According to the book and documentary, to keep investors and business partners in the dark about what was going on, Holmes used the excuse that the breakthrough invention had to be kept under the tightest possible wraps, lest competitors leap ahead. Her lawyers reinforced this notion, giving it enough credibility that Holmes could draw in otherwise rational people with the promise of a healthier society, a disrupted industry, and capital gains. This gave Holmes the comfort to actually fake demonstrations of the Edison: while important visitors were taken on a tour, their blood samples were pulled out of the machine and whisked to a downstairs lab, where they were analyzed on commercially available equipment, with the results returned to the meeting room just in time.
Nondisclosure agreements were secured from everyone who came into contact with the company. And those agreements were enforced vigorously, apparently even using private investigators and threats of crushing litigation to keep knowledgeable employees from speaking with the press. (If you are interested in learning how lawyers can terrorize well-meaning whistleblowers, I urge you to read the book.)
Secrecy was apparently also used within the company, keeping employees “siloed” from other areas by an extraordinarily strict need-to-know policy. As a result, those who worked on running the machines didn’t know what the engineers might be doing to fix and improve them, and new development projects kept people guessing about whether the real breakthrough technology was being sharpened in the next room. All of this partitioning of knowledge was coupled with enthusiastic “us vs. them” speeches by Holmes designed to keep morale strong and faith alive.
Of course, the “dark side” of trade secrets—where the law enforcing confidentiality is used in unintended ways—isn’t unique to Theranos. Nondisclosure agreements have been accused (without much empirical evidence) of discouraging employees from moving to new jobs, for fear that they will inadvertently misuse some confidential information. More recently and notoriously, they have become part of the “#MeToo” conversation, as a mechanism for suppressing the truth by silencing victims of abuse.
But we have ways of preventing, or at least mitigating, these inappropriate consequences. Courts routinely exercise discretion to favor the free movement of employees from job to job. And federal law now provides strong whistleblower protections for those who want to share confidential information about potentially unlawful conduct with the authorities.
Even the Theranos story doesn’t mean that trade secret law is inherently dangerous. Consider Apple, one of the world’s most secretive companies. (Holmes famously modeled her clothing and business habits after Steve Jobs.) Apple has consistently used NDAs and secrecy management to protect products under development, to great effect when they are ultimately unveiled, all without touting non-existent technology. And it’s easy to imagine how Theranos might never have happened if investors and business partners had been less credulous and more insistent on understanding the technology. It is entirely possible to couple information security with appropriate governance and oversight; indeed, that is how most companies behave. More than any problem with trade secret law, the Theranos debacle is about greed, hubris and the overwhelming power of human denial when faced with inconvenient facts.
However, the Theranos story got me thinking about other aspects of secrecy and technology that pose stickier problems. The one that comes to mind is artificial intelligence (AI). As a concept, AI has been with us a long time, representing the evolution of powerful computing that we imagine might someday mimic the human brain. But only recently has it seemed to be on the near horizon, with systems being deployed on information-sharing platforms like Facebook and, soon it seems, in our cars. It’s one thing to let Google protect its search engine, but we have seen how fake news can affect elections, and we wonder how computers will be able to make life-or-death decisions while driving themselves (and us) down the road.
A common public reaction to these concerns about personal-impact technology is to demand “transparency” of the companies that use AI in their tools. We want to know exactly what algorithm determines our news feed, and we want visibility into what the car will do when faced with the choice of hitting the baby carriage or grandma. But here we run into a dilemma common to all forms of advanced technology: we need to encourage the innovation that gives us new products and services, but to make the necessary investment of money and risk worthwhile, we need to guarantee secrecy so that the innovator can recoup that investment.
We faced a similar problem as a society a century ago with another emerging technology of profound individual consequence: pharmaceuticals. Eventually we fashioned an approach that has served both private and public interests fairly well, in spite of the narrow loophole that Theranos exploited. Drug companies are required to reveal their formulations and test data to the FDA, where technically qualified officials examine the drug or device for safety and efficacy. All this is done behind closed doors, to protect the company’s investment in some very expensive and risky research. But because we have confidence in the ability of the agency to get it right, we are comfortable using the drugs that have been approved.
It’s not clear to me that a similar model would work to address the potential flaws in secret AI engines. How would we develop models for testing everything that could possibly go wrong? How could a government agency reliably make predictive judgments about software that operates in the world, rather than chemicals that operate in the human body? And even if those challenges could be overcome, what do we do about the fact that the AI algorithms, unlike drug formulations, are not static, but are built to dynamically alter themselves through machine learning?
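That last point is easy to see in miniature. The toy sketch below (my own illustration, not anything from the Theranos story or any real AI product) uses a simple online-learning perceptron: a regulator could audit the model today, yet after a few more training examples arrive in the field, the very same input receives a different decision. There is no fixed artifact to certify the way there is with a drug formulation.

```python
# Toy illustration of why a dynamically trained model is hard to certify:
# an online perceptron whose weights keep changing as new data streams in,
# so the same probe input can get different answers before and after updates.

def predict(weights, x):
    """Return 1 if the weighted sum is positive, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def update(weights, x, label, lr=0.5):
    """Standard perceptron rule: nudge weights toward a labeled example."""
    error = label - predict(weights, x)
    return [w + lr * error * xi for w, xi in zip(weights, x)]

weights = [1.0, -1.0]          # the model as "audited"
probe = [1.0, 2.0]             # the input we keep asking about

before = predict(weights, probe)   # weighted sum = 1 - 2 = -1  -> decision 0

# New training examples arrive after deployment and shift the weights.
for x, label in [([1.0, 1.0], 1), ([0.5, 2.0], 1)]:
    weights = update(weights, x, label)

after = predict(weights, probe)    # weighted sum is now positive -> decision 1

print(before, after)               # prints: 0 1
```

Two labeled examples are enough to flip the decision on an unchanged input, which is the certification headache in a nutshell: any approval stamped on the pre-update model says nothing reliable about the post-update one.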
I don’t have a good answer to these questions. Unlike the situation at Theranos, where the risk of harm from secrecy could have been met by some healthy skepticism and common sense, AI presents a uniquely difficult challenge to find the right balance of competing interests. We need to keep talking about it.