U.S. Ambassador to the United Nations Nikki Haley said in a December interview with a small group of columnists in New York that one of the largest threats facing America was cyber warfare.
A handful of House members suggested in a September letter to Director of National Intelligence Dan Coats that “deepfakes” — videos that use artificial intelligence to generate convincing but fabricated footage of real people — were one of America’s biggest emerging vulnerabilities.
And over the last year or so, a group of the world’s preeminent scientists and researchers, 26 in total, banded together to write a report warning of the time when artificial intelligence would surpass human performance and, basically, take over much of what humanity currently does — driving, writing, warring, even creating. That was their way of saying that one of America’s most serious concerns was the potential for total technological dominance.
The commonality?
All these sources can claim some level of inside knowledge of the biggest tech-related security threats facing America. Another commonality? They’re all wrong in their assessments.
Truly, the biggest technology threat facing America is the unsuspecting, unknowing, unaware, perhaps too-trusting nature of the American people.
After a year of closely following and writing about all-things-artificial intelligence; after months upon months of investigating the perils, pitfalls and positives of emerging technology; after performing daily research, sending out scores of email inquiries and conducting dozens of interviews about what’s coming down the pike in terms of a computerized, always-connected society; and after witnessing time after time, up close and first-hand, the stonewalling, sidestepping, shunning or less-than-forthcoming responses, reactions and reflections of many from the scientific, research and even political communities — the better conclusion is that the biggest Big Tech threat facing America is the failure of Americans to recognize the threats Big Tech brings.
Sadly, gloomily, this threat is not so much born of an inability or unwillingness of Americans to educate themselves as it is of a purposeful intent on the part of the Big Techers to downplay, skew, spin, disregard, dismiss and even outright disguise and deceive us about the darker truths of emerging technology.
The medical industry, for one, has not been forthright about what it takes to forge the type of breakthroughs A.I. advocates promise will come. Here’s an example: In mid-December, Globe Newswire sent out a release that simply crowed and glowed about OWKIN, an outfit established in 2016 to find practical applications for technology in health care, and its creation of “the world’s largest AI-powered medical research network.”
This network, the release stated, comprised “more than 30 prestigious hospitals and research institutions across the US and Europe” and would, once fully operational, help medical professionals better treat health problems that have occurred, predict health problems before they occur and even, ultimately, stop health problems from ever occurring.
Great. But at what cost? The release, of course, didn’t say. And therein lies the problem.
Whenever the medical community talks about using A.I. for health breakthroughs, what they’re really talking about is taking the private, personal medical records of Jane and Joe Q. Public, feeding them into a database, and using the information to determine common and shared attributes that can then be used to spit out diagnoses. What they’re really talking about is letting machinery, not mankind, ultimately make medical decisions for doctors, insurers and hospitals.
Docs don’t like to say so; neither do the companies with profits tied to making A.I.-medical gains, nor the politicians who invest heavily in these outfits. But technology in health care is not the golden ticket its advocates would have us believe. Rarely is patient privacy mentioned in the same breath as A.I.-medical announcements.
Hacking attacks are one thing; in mid-2017, Accenture reported that 13 percent of patients in England had discovered their private healthcare data had been stolen from their technology-dependent doctors’ offices. But it’s the more insidious attacks on patient privacy, on patients’ right to decide, that are the stuff of real shock and awe.
“Facebook sent a doctor on a secret mission to ask hospitals to share patient data,” CNBC reported in April of 2018. “[The company] asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.”
Where’s the press release on that bit of A.I.-tied health industry news?
Far too often, the public only hears of the glories of technology, and not the realities.
Far too often, headlines cheer, for instance, the convenience of Alexa — but leave out the spying and recording that have dinged the devices; news outlets applaud the crime-fighting abilities of facial recognition technology — but omit the data collection taking place on innocent citizens, and fail to point out the contextual applications of the Fourth and Fifth Amendments; publications wax enthusiastic about the convenience of human chipping — but ignore the questions of who gets that data, who keeps that data, who supposedly safeguards that data.
Far too often, when media outlets do try to go beyond the glitz and contact the pertinent scientific sources for deeper explanation, they’re told things like this — as I was, when following up on a story about a robot dubbed “Fabio” that was “fired” from its grocery-store greeting job for scaring customers: “Sadly, I am instructed not to comment on this story.”
That was from “Fabio”-affiliated researcher Oliver Lemon, a professor of artificial intelligence at Heriot-Watt University in Edinburgh, in response to my email inquiry in January about the widely reported robot story.
How’s a person supposed to take that?
The fact is, absent knowledge, proper decisions cannot be made. Consumers, citizens, cannot tell what’s good versus what’s bad — cannot determine if the ends justify the means.
In 1933, when this country was in the throes of great change, economically, socially, militarily and otherwise, Franklin D. Roosevelt offered in his inaugural address these small words of comfort: “The only thing we have to fear is fear itself.”
If technology is the newest great change, economically, socially, militarily and otherwise, well then, perhaps more apt for our modern times would be this: The only thing we have to fear is the unawareness of what to fear.
Mark these words: Ignorance about the full truths of A.I. will one day be America’s downfall.
First appeared at The Washington Times.