hckr.fyi // thoughts

🤖 + 🍺 Bots and Beer 0x11 - The Societal Impact of AI

by Michael Szul on

The Bots + Beer newsletter ran for about three years, from 2017 to 2020, during a time when I was highly involved in chatbot and artificial intelligence development. It was eventually folded into Codepunk as part of the Codepunk newsletter.

Some portions of this newsletter were contributed by Bill Ahern.

The Societal Impact of AI

Many of the "ethics of AI" discussions exist in a vacuum. What I mean is that we often refer to something that AI does (or that we think AI will do in the future), and we generalize that context, attempting to place it in an almost utilitarian framework. We seem to immediately jump to the academic, and analyze ethical issues as a platonic ideal. We think of the lives affected as parameters in the equation, instead of adding an extra dimension to the problem. When we ask what AI means to the ethics of humankind, we look at the impact on the person, but what about the impact on society as a whole, in the context of our various cultural bubbles?

"Machine Learning has become alchemy." --Ali Rahimi

Ali Rahimi has argued that machine learning today has become much like alchemy. Our deep neural networks are poorly understood and filled with misleading or under-evolved theories, and yet while the media consistently emphasizes the fears and uses of artificial general intelligence (AGI)--something that is years, if not decades, away--our deep learning models are finding their way into healthcare, criminal justice, and even credit scores.

A recent article in The Ethical Machine argues that these AGI-related conundrums, while intriguing conversation-starters, are mere distractions from the more pertinent, immediate concerns at hand--namely, those surrounding the development and implementation of artificial narrow intelligence (ANI).

ANI refers to AIs trained on specific datasets to perform a specific set of tasks; its use cases are endless--social media feeds (e.g., Facebook, Instagram), suggested actions in smartphone applications (e.g., Skype, LinkedIn), music streaming services (e.g., Spotify), online shopping (e.g., Amazon)… virtually every digital service with which you interact has some form of integrated ANI. So instead of hypothesizing and disputing the problems humanity will face at some point in the future after the integration of AGI in society, perhaps we should instead question the ethical implications of the existing technologies that power our daily lives in an integrated, diverse society?

When technology is applied to society, we often reach the inevitable compromise between social good and individual privacy, but the obsession with hypotheticals in a Jarvis-enabled society drives the conversation away from the personal transgressions of machine learning today, and focuses it instead on the dreams of the future. Instead of worrying about HAL, what about the small design decisions that create rolling, growing waves in the fabric of society? What's missing in an algorithm designed by five white dudes from Connecticut that could be life-altering for Muslim women immigrants in Chicago? And what happens when we can't trace those AI decisions back to the source?

We are designing systems for scale based on data accumulated in mostly a single societal context, potentially amplifying the differences between cultures. In addition, AI hints at the individualist vs. collectivist divide in culture, law, and general society, and the need to talk about the prioritization requirements of AI when it comes to comparing the public good to that of the individual.

But while those in research, technology, and philosophy have questions that need answers, the over-hyped cycle of the news media (combined with wariness over backpropagation) could force disillusionment, and ultimately another AI winter. It was precisely this kind of over-hype by companies and the media during the LISP machine era of the 1970s that led to unreasonable public expectations of AI. When these were not met, people decided AI had "failed," causing funding to dry up, resulting in a huge decline in AI innovation, interest, and research. With the challenges facing today's society--namely climate change and agricultural needs--we can ill afford another misstep.

Editors' Note:

Notice a few changes? Bots + Beer has really taken off over the last few months, so we decided to invest a little love, make some minor style tweaks, and refine our process. Let us know what you think by replying to this email.

This issue features special guest contributor Avantika Mehra--a "cereal" entrepreneur, professional caffeinator, seasonal athlete, and student at the University of Virginia. She writes on Medium, and likes karate, sound design, and olives. She also tweets.

If this newsletter is about the "future of computing," then Avantika is one of those people creating that future.

Innocent Technology for the Sinister?

It's hard for me to back away from technology, and I often find myself arguing with those who want to revolt against the idea of data collection or large-scale profiling. We all want a Star Trek society, but in Star Trek Into Darkness, when Kirk uses a tablet to watch a video recording of a bombing, and he zooms and pans around the image… well, you don't get that level of detail without cameras everywhere. There's a trade-off to personalized shopping. There's a trade-off to having a highly efficient personal digital assistant that tells you which restaurants you're currently craving have tables available, and then calls you a self-driving car, which takes you on a familiar route because you like the scenery, while playing music that you enjoy without having to ask.

I find that these fears of a surveillance nation--a Big Brother state--and all of the personal invasion that we abhor are more a result of the misuse of technology. We have a choice of how to use technology, and unfortunately, many companies currently use it to take advantage of consumers, or in the case of Cambridge Analytica, manipulate an entire constituency and election.

Any technology can be misused and send up red flags. Amazon has been working diligently on getting its Alexa service to recognize accents, ethnic origin, gender, and other identifying characteristics--all meant to make the technology, and thus the services, better. The problem is that this same information, when combined with purchase history, location via IP address, and other metadata, will likely be extremely valuable to government and law enforcement agencies, not to mention consulting companies working in grayer areas.

Is it the technology's fault? Not likely. It's another example of needing to look in the mirror to build a better society.

Back to the Cars Again

Transportation is the linchpin of society, as society, culture, cities, and knowledge evolve and grow with the speed and convenience of travel. Some of that travel is digital (e.g., the Internet's information superhighway--I'm old, so that term is still relevant to me), while the travel that has advanced society the most is physical: horses, cars, planes, better cars, hyperloops (maybe)--all of these things increase our ability to grow and communicate.

We've had the self-driving car debate ad nauseam, and I often tire of the repeated arguments. Who builds the moral machines of our society? MIT tried to answer some of these questions, but managed instead to show that answers vary greatly according to social standing and culture. Different answers could apply to different regions, and to those with different belief structures, calling into question any purely technical solution.

The truth is that our algorithms will likely need to take into account societal principles--no matter how potentially outdated--in order to properly serve that segment of society.

WebAssembly

Web development isn't normally synonymous with AI, but JavaScript has been the most penetrating programming language of the last decade--even making its way into machine learning. Will WebAssembly change all that once programmers can use any language on the web or in Electron apps?
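As a minimal sketch of what that shift looks like in practice: WebAssembly modules are just a portable binary format, and JavaScript loads them through the standard `WebAssembly` API. The bytes below are a hand-encoded module exporting a single `add` function--in real use, a compiler for C, Rust, Go, and so on would emit this same binary format for you.

```javascript
// A minimal WebAssembly module, hand-encoded as raw bytes.
// It exports one function: add(a: i32, b: i32) -> i32.
// Compilers for C, Rust, Go, etc. emit this same binary format.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously (works in Node.js and browsers).
const wasmModule = new WebAssembly.Module(wasmBytes);
const wasmInstance = new WebAssembly.Instance(wasmModule);

console.log(wasmInstance.exports.add(2, 3)); // 5
```

The interesting part is the last three lines: from JavaScript's perspective, `add` is just a function, regardless of what language it was originally written in--which is exactly why WebAssembly could loosen JavaScript's monopoly on the web.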

Hardywood Bourbon Barrel Pumpkin


Bill gave us a rundown of some of his pumpkin beer samples last issue, and now that we're into December, it's out with the pumpkin and in with the stouts, where craft breweries tend to rely on milk stouts, gingerbread ales, warming beers, and occasionally some extra-sweet dessert beer. I guess that makes this entry the last of the pumpkins, as I wanted to highlight Hardywood's Bourbon Barrel Pumpkin Farmhouse Ale.

Farmhouse ales/saisons tend to have a variety of flavor profiles depending on how they are brewed, but the funkiness of this ale, combined with the bourbon barrel aging, smooths things out. Despite an alcohol content over 10%, the flavor isn't harsh, and since saisons don't tend to be high in sugar, the balance keeps you from having a bad morning. Generally, I like to stick to a variety weekend-to-weekend, especially during pumpkin season, but this was one that I had to pick up multiples of.

And if any of you want some Codepunk or Bots + Beer laptop stickers, reply to this email with your mailing address, and I'll send some out.