What the rise of Artificial Intelligence is going to do to your privacy

From this title, I think I’ve stolen my own thunder. Furthermore, I am using the term Artificial Intelligence loosely because it sounds a lot better than Predictive Analytics or Machine Learning, which are what I actually want to discuss in this column. Finally, one more disclaimer: since this paper’s fan base is located, among other places, on the San Francisco Peninsula now nicknamed Silicon Valley, I have to concede that some readers are going to know a lot more about this than I do and that my discussion here will insult their intelligence. Instead of begging their pardon, I implore them to comment. There is a comment section in the online version of this article…

Let me start with the matter that caused me to push aside the list of upcoming events I was going to publish (one of them, the 3rd Conference on WWII in the Philippines on September 9, where tickets can be purchased at https://www.eventbrite.com/e/3rd-conference-on-world-war-ii-in-the-philippines-the-legacy-of-two-nations-tickets-34923528213, is worth the parenthetical insert) and write with fury and fever. Have you ever noticed those companies that charge very little to sell you a service? It is usually a service that is sold, not a product. That service collects information about you, and, explicitly or not, through a Terms of Service agreement (that you didn’t read) or a simple lack of legal protection, you are allowing them to keep that information, to study you, to analyze you. Some of these companies are well-known names like Facebook, Amazon, and Uber; others, like 23andMe, are less familiar.

23andMe provides DNA testing services: for $99, you can find out about your ancestry, and for $199, about your health and ancestry. You get some entertaining insight into your genetic background, and they grow their database of DNA samples. With the rise of Predictive Analytics and Machine Learning, which have enabled ever more accurate predictions of human consumer behavior, health, and more, has come a rise in the value of all things data. You never would have guessed that companies are making loads of money trafficking in nuggets about us that they get practically for free. Well, of course you’ve guessed: you live in Silicon Valley.

Here’s an issue you may not have considered. Private companies like 23andMe will likely get purchased by a Google, a Facebook, or an Amazon. These companies, with monstrous market valuations comparable to the GDP of the Philippines, are the aggregators of datasets, datasets about us, the public. With large amounts of data of various kinds gathered under a single roof and the power today to make use of that information, individual companies will be blessed with the ability to know things about us that we do not know ourselves. It is not the prediction of my consumer behavior that concerns me; I expect that kind of creepy obsession with how I spend. It’s the other stuff, the things I can’t think of, that scares me. When three giant companies have bought up all the datasets, will they know how long I will live (can this be predicted from my genetic code?), or where I will be tomorrow, or, even creepier, where I am taking my kids today and tomorrow and next week, their ages, birthdays, friends, and hobbies? And what if an unfriendly entity like, say, Russia or North Korea acquires a controlling interest in one of these public companies? Will all of our data belong to Vlad or that crazy missile guy with the bad haircut? And what if they don’t even pay for the privilege… what if one of these companies/data repositories simply gets hacked?

Someone I know expressed concern that the rise of AI would lead to violations of our civil liberties. But is it that simple? Is it a civil liberties violation if it is another government that is invading our privacy?

Some legislative protection needs to be drawn up out of that intellectual vacuum called Congress, but it won’t come fast enough. Datasets need to be ring-fenced, especially medical data and all information related to minors. That is a first step before attacking the real problem: stopping all that data from ending up in one place.