
🤖 + 🍺 Bots and Beer 0x08 - What AI Should Be... is Just as Complicated as Being Human

by Michael Szul

The Bots + Beer newsletter ran for about three years, from 2017 to 2020, during a time when I was heavily involved in chatbot and artificial intelligence development. It was eventually folded into Codepunk as part of the Codepunk newsletter.

Some portions of this newsletter were contributed by Bill Ahern.

What AI Should Be… is Just as Complicated as Being Human

What exactly should AI be? Should it be greater than us? Equal to us? Less than us? Artificial intelligence seems to be a catch-all term for an idealized state of human-machine interaction, but what we've developed thus far deals with patterns and processing, not humanity in any traditional sense. I read a lot of articles on the advancement of artificial intelligence, and many argue that AI should be greater than humans--better memory, better decision-making, faster responsiveness. At the same time, we feel the need to prevent AI from making decisions based on certain data sets for fear of bias, or we decide that cooperation between algorithm and human is the best combination.

Basically, we seem to have this deep-rooted desire for artificial intelligence to be a magical box--some sort of God machine that is at once completely human and yet infallible--and we have a tendency to want to layer it with human traits while also encouraging it to be "better" than us. The conversation borders on the philosophical.

Think about it. Religious texts tell us that humans were made in God's image, and here we are eschewing that relationship while simultaneously attempting to build artificial intelligence and robotics in our own image. Ironic, huh? It's one of the reasons I always hated the "computer is like a brain" analogy. That's backwards. We've envisioned and created computing and logic systems to be like the human brain. We're building those things in our own image because it's all we know.

Therein might lie the fallacy. As we move toward artificial intelligence, we have this creative desire, fed by inspiration (whether from religion, folklore, or science fiction), to make something in our image, and we also have a natural bias toward building what we know (or think we know), but we're aware in the back of our heads that it might not be enough. Humanity might not be enough--certainly not enough of an inspiration… and so while we continue to search for the things that make machines more human, we also search for the things that make them less human--less like the worst in humanity.

Maybe it is within that conflict that we'll find the inspiration to go beyond the imaginary realm of what we think artificial intelligence should be, and finally get over the hump that takes AI to the next level.

Algorithms that Don't Remember

Everybody is trying to find the key to greater artificial intelligence--the thing that takes us over the aforementioned hump of mere algorithms for word frequency and pattern recognition. Fitting and compression are two phases that neural networks appear to go through as they refine their internal representations during training. In the compression phase, the network sheds information about the input data. Some researchers believe this can be manipulated as a way of strategically forgetting information, ultimately strengthening the model as a whole. Will this help? Or is this another situation where we are using a human metaphor ("forgetting") because we have no other context in which to examine it?
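To make that a little more concrete, here's a minimal sketch of what "watching a network forget" might look like. It's my own toy construction, not any particular research group's method: the synthetic task, the tiny network, and the entropy proxy for "information kept" are all assumptions made purely for illustration.

```python
# Toy sketch: watch a hidden layer "compress" during training.
# Everything here (task, network size, entropy proxy) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: the label depends on only 2 of the 10 input features,
# so the network can afford to "forget" most of the input.
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(10, 8))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(8, 1))    # hidden -> output

def hidden(X):
    return np.tanh(X @ W1)

def code_entropy(H):
    # Crude proxy for information kept: entropy of the hidden units'
    # sign patterns. Fewer distinct patterns means more has been "forgotten".
    _, counts = np.unique(H > 0, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

lr = 0.05
for epoch in range(2001):
    H = hidden(X)
    out = 1 / (1 + np.exp(-(H @ W2)))            # sigmoid output
    grad_out = (out - y[:, None]) / len(X)       # cross-entropy gradient
    grad_W2 = H.T @ grad_out
    grad_H = (grad_out @ W2.T) * (1 - H ** 2)    # backprop through tanh
    grad_W1 = X.T @ grad_H
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    if epoch % 500 == 0:
        acc = ((out[:, 0] > 0.5) == y).mean()
        print(f"epoch {epoch:4d}  accuracy {acc:.2f}  hidden-code entropy {code_entropy(H):.2f} bits")
```

Whether deliberately steering that kind of forgetting actually produces stronger models is exactly the open question--the sketch only shows the sort of measurement you'd stare at while asking it.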

The Moral Obligations of Social and Data Companies?

Facebook absolutely has a moral component to its business model; any company that doesn't should. If Facebook is the first tech company to acknowledge these responsibilities (and it isn't), then it needs to lead the charge and set an example. Using machine learning to seek out patterns of abuse and hate is critical. Training this technology, moving forward, to understand the nuances of human interaction better than perhaps even humans can: that's the challenge. Data sets fed to AI are flawed and biased because humans are flawed and biased. However, machines can be trained to recognize their own bias and work past it.

I would posit that where humans struggle with objectivity, machines would not. The greatest effort is to condition the machine to understand the difference between negative behavior and the free expression of ideas, and people need to be prepared to watch the machines make mistakes as they undergo this complex process of learning. While machines will be new to this, humans are not, and we must recognize that we have not been doing such a stellar job ourselves of separating free speech from abusive behavior. My opinion: companies like Facebook, Microsoft, and Google not only have a responsibility to integrate ethical machine learning into their business models, but a moral imperative to teach the machines we create to possess an ethical model and, hopefully, to apply it better than even humans themselves have.
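For a sense of what "machine learning seeking out patterns of abuse" looks like at its absolute simplest, here's a sketch of a bag-of-words classifier. The example phrases, labels, and threshold are invented for demonstration; a real moderation system at Facebook or Google scale is vastly more sophisticated, but the shape of the idea is the same.

```python
# Minimal sketch of a text classifier for flagging abusive patterns.
# The training phrases and labels below are made up for illustration only;
# production moderation systems use far larger data sets and richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = abusive, 0 = acceptable.
texts = [
    "you are worthless and everyone hates you",
    "get lost, nobody wants you here",
    "people like you should not be allowed to speak",
    "I completely disagree with your argument",
    "thanks for sharing, this was a great read",
    "here is a source that contradicts your claim",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: the simplest "pattern of abuse" detector.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new comments; a real system would route borderline scores to human review,
# which is exactly where the free-expression-versus-abuse judgment call lives.
for comment in ["nobody wants your opinion here", "I think your data is wrong"]:
    prob = model.predict_proba([comment])[0][1]
    print(f"{prob:.2f}  {comment}")
```

The hard parts of the argument above live entirely outside this code: choosing the threshold, auditing the training data for bias, and accepting that the model will make mistakes while it learns.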

Bot Emulator/MSBot CLI Tutorial Series

I've been ripping through a bunch of chatbot tutorials now that Microsoft has announced v4 of the Bot Framework SDK and its many command-line utilities. First up? The new Bot Emulator, and all of the cool stuff you can do with the MSBot CLI.

Evil Twin Big Ass Money Stout

Evil Twin's Big Ass Money Stout is a 17.2% Imperial Stout. It's absolutely delicious, but there is one key caveat: do you like shots of whiskey? Because this beer is as close to shots of whiskey as any beer has a right to claim. IT IS STRONG. It's thick like motor oil and has a rich, dark caramel flavor that is outstanding. And it should probably be consumed from a shot glass.