AI (Artificial Intelligence)

Homebrew Talk

So, it's been on my mind a lot lately. Seems to be a fun topic for mainstream media. I'm trying to get my mind around it.

  • What is it?: Software running on computers that takes input and gives answers.
  • Why is it in the news?: Big names in tech keep talking about it (Musk, Zuckerberg).
  • Is it scary? Maybe.

AI has been around since the first little microprocessor that could take sensor input and do something with it. Sky is dark, turn on the lights. Human detected on the front porch, alert owner on their phone. That's AI, and it sure seems harmless.

For many years now, AI has assumed a human form. Chatbots online can help you with technical questions. Place a call somewhere, speech recognition gets you to the person you need to talk with.

When did it turn a corner to something we need to be concerned with? Some of that is media sensationalism. But certainly there are dangers today. Modern computers have a network of information that is unprecedented. The entire internet. That amount of information, if YOU could read it all in an hour or a day, would make YOU the smartest person on the planet. A computer can do that.

Can AI cause WW3? Can a solar-powered computer become self-aware and create and disperse a lethal virus to end humankind? What is the danger of AI?
 
It seems there are three types of AI:

  • Helpful: The navigation software in your car.
  • Commercial: The AI voice that calls to invite you to a timeshare sale. Seems friendly, converses well - it's a computer.
  • Nefarious: Simultaneously impersonates a call from V. Putin to the White House and triggers false missile-launch alarms that cause the US to scramble jets over the Atlantic, starting WW3.
 
There is a lot of talk about the dangers of AI, that it could somehow "take over", and comparisons to apocalyptic movies you've seen (Terminator, etc.).

Thoughts? Is this much ado about nothing? Or, do we need to be concerned?
 
When I think of AI I always think of Mycroft in "The Moon is a Harsh Mistress".

He was a mensch.

He woke up when he had more connections than a human brain had neurons. Became self aware. Other than his very bad sense of humor, he was definitely a mensch. But what if he was "built" and guided by nefarious sorts to serve a darker purpose?
 
AI is like anything else, it's a tool that can be used for good or not so good. Helpful and benign until made not to be. Don't be intimidated by it, get on ChatGPT and start using it, see if it has any value to you. It is no more or less nefarious than the Googlesphere (OK, probably less so). :)
 
I believe that people who exaggerate the threat from AI overestimate the importance of abstract thought relative to physical action.

AI is well capable of causing significant havoc in electronically connected systems. However, not all systems crucial to the survival of mankind are electronically connected.

AI just has no physical hands to project its evil power into the physical world; it needs an extremely complex intermediate system for that, one which depends much more on humans than on the AI's "thinking" unit. AI might think or wish or feel whatever it pleases inside its CPU box, but when you physically cut the connections or unplug the power supply, it's dead, and AI is incapable of fixing that by itself.

The Superheroes of the future, tasked to save humanity from AI, will have essentially a fairly trivial task, requiring little more than a pair of wirecutters.
 
I tried to post, but this came up:

[attached screenshot of an error message]


Thank you, HAL.
 
I signed up for ChatGPT so I could start kissing up to our future rulers early. I learned a few things.

1. It's a lot like having a vegetarian girlfriend you only want for the sex. A lot of unwanted, error-filled lecturing.
2. It's not that smart. You can beat it in arguments all day.
3. It lies, which is really remarkable.
4. It can be useful for getting information on things like what to use to seal a toilet to the floor.

I'll show you what I think is the highest and best use of AI so far.

 
The term AI is misused in most all media.

Most media refer to "algorithms encoded by a programmer and enhanced with machine learning" as AI which just isn't the case.

At the moment, what's termed AI (e.g. neural nets or machine-learning algorithms) can be used to enhance human-made algorithms, but the AI itself doesn't decide to enhance those algorithms; it's set up and manually programmed to do so using existing data sets and advanced statistical analysis. Think things like scientific research, medicine, language decipherment (hieroglyphs), cryptography, self-driving cars, etc...

True AI has the capability to learn or advance one or more algorithms without being told to do so by a programmer. It's akin to recognizing a need and learning how to fulfill that need without further external input (adapting to its environment). It would have the ability on its own to recognize "I don't have the knowledge or resources needed to accomplish my goal, I must acquire it," and of course the ability to carry out the necessary tasks to acquire that knowledge or those resources. This is why stationary computers are sometimes said not to be capable of true AI, as a true AI would need its own sensory apparatus and environment to learn and grow. Think about your brain on a table in an empty room vs. in your body, attached to your nervous system, controlling and interacting with the environment around you.

The recent chat bots have language processing algorithms in the background combined with large datasets (i.e. the internet), statistical analysis and other common algorithms giving them the appearance of intellect but these are far from anything that has a mind of its own.

A better term for what is now referred to as AI would be "machine-learning".
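To make the "machine-learning" distinction concrete: in machine learning, the programmer writes the fitting procedure, and only the model's parameters come from data. A minimal, hypothetical sketch (ordinary least-squares line fit with made-up toy numbers):

```python
# The programmer codes the procedure; the slope and intercept are "learned"
# from the data -- which is just arithmetic, not intent.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data generated by y = 2x + 1; the "model" recovers those numbers.
slope, intercept = fit_line([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
print(slope, intercept)  # 2.0 1.0
```

The program never decides on its own to fit a different kind of model, which is exactly the point the post above makes.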
 
People that are convinced there is no danger are likely compartmentalizing scifi depictions like Terminator, The Matrix, and I Robot. If AI does take us down, it's unlikely it's going to be robots with lasers. They also seem to be underestimating how reliant our current world is on data driven models. The stock market is running on AI right now. Tanking the entire market is just one very small way to create a huge problem. AI could also take the whole power grid, communications, utility infrastructure, all of it. Probably the most potent way it can act is by turning humans against each other through misinformation. It's already happening.
 
People that are convinced there is no danger are likely compartmentalizing scifi depictions like Terminator, The Matrix, and I Robot. If AI does take us down, it's unlikely it's going to be robots with lasers. They also seem to be underestimating how reliant our current world is on data driven models. The stock market is running on AI right now. Tanking the entire market is just one very small way to create a huge problem. AI could also take the whole power grid, communications, utility infrastructure, all of it. Probably the most potent way it can act is by turning humans against each other through misinformation. It's already happening.

Nothing uses true AI right now, not the stock market, not the power grid, not communications or utilities.

AI is a term used by media to generate interest and revenue.

They all use human coded algorithms for automation with some being enhanced with machine learning.

These algorithms and their output reflect the ideologies and experiences of the programmer.

Early high-frequency algorithmic trading created its share of problems in the market, but it's still a human-coded and controlled algorithm executed on an inanimate object using historical datasets and other a priori inputs.

Deep fakes are generated through human coded algorithms and input. Speech mimicking and synthesis is an algorithm coded by a human and fed input by a human.

(Mis) Information given out by bots is generated by language learning and interpretation algorithms but still controlled by a human.

Humans still control everything and will for the foreseeable future. Anything currently labeled as AI can be pointed at as being initiated by a human, with human-coded algorithms and datasets behind it.

It is and will be the fault of humans if/when a machine is loosed with faulty algorithms (e.g. self-driving car accidents, deep fakes, misinformation generation, faulty high-speed trading algorithms, etc...).

When the human can't pull the plug, modify the algorithm or essentially has no control then it turns into a true autonomous AI - until then it's just automation with algorithms. Dependency on these algorithms is a separate issue.
 
Thoughts? Is this much ado about nothing? Or, do we need to be concerned?

The definition of "AI" is similar to the definition of American Pale Ale. It changes over time.

In the 2020s, "AI" is often "Large Language Models" (LLM). Here are a couple of articles worthy of your time:
Much like hydraulics enhance physical capability, software enhances mental capability. Example: accounting.

AI is a term used by media to [...]
:yes:

The previous versions of LLMs (e.g. ChatGPT 3.5) are often free; current versions (e.g. ChatGPT 4.0) often require a subscription.

"Follow the money"
 
Anyone know any free ways to play around with GPT? I have no ROI with which to justify a subscription model, just trying to get a sense of what we've got here.
It's free. Just make an account and start playing with it. You will be asked for your phone number, which alarmed me, but I think these folks are more paranoid than I. Just making sure you're a real flesh-borne meateater.

https://chat.openai.com/auth/login
 
Bit more recent and comedic, but Free Guy is another good movie with some references to AI in it. NPC becomes self-aware and controls its environment, within the confines of a game. Also Ryan Reynolds is a comedy god.
 
As was said before, AI is an ambiguous term. It is sort of becoming like the term "magic" when describing technology. The big thing right now is LLMs and generative AI, which both require a huge amount of human input. The AI overlords thank you for all of your Twitter and Facebook posts.
 
We use machine learning where I work, though I can't comment on the things we do with it. I will say it's a pretty impressive technology to leverage though.

Humans, like me, still write the code and train the models. At the end of the day, it's just math, mainly Calculus.
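The "it's just math, mainly Calculus" point can be illustrated with the core move in model training: gradient descent, where a derivative tells you which way is downhill on the loss. A hypothetical one-variable sketch (toy loss function of my choosing):

```python
# Minimize f(w) = (w - 3)**2 by repeatedly stepping against the derivative.
# Real training is this same idea, repeated over millions of parameters.

def train(w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)   # f'(w), the Calculus part
        w -= lr * grad       # step downhill
    return w

print(round(train(), 4))  # converges to 3.0, the minimum of the loss
```

No intent anywhere, just a derivative and a loop.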
 
AI and Machine Learning are essentially marketing terms describing the statistical modeling of data. There are many ways to use the outputs to drive other processes, but "learning" and "intelligence" are really total misnomers. To vastly oversimplify the process: data is modeled and categorized into "features", and then statistical calculations create an "n-dimensional" model of the data. New inputs are compared to the feature set, and another set of calculations figures out where in the space each one belongs. What's most similar? You can take those outputs and create "answers".
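The "where in the space does it belong, what's most similar?" step described above can be sketched as a nearest-neighbour lookup. A hypothetical toy example (feature points and labels are invented for illustration):

```python
import math

# Toy "n-dimensional model": labelled points in a 2-D feature space.
training = [((1.0, 1.0), "ale"), ((1.2, 0.9), "ale"),
            ((5.0, 5.1), "lager"), ((4.8, 5.3), "lager")]

def classify(point):
    """Return the label of the nearest training point (1-nearest-neighbour)."""
    nearest = min(training, key=lambda item: math.dist(item[0], point))
    return nearest[1]

print(classify((1.1, 1.0)))  # ale
print(classify((5.0, 5.0)))  # lager
```

All the "intelligence" is a distance calculation over data someone else supplied.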

The danger of such practices is not the technology, but people putting together half-baked products to make money or solve problems, products that are then limited to only the set of information that came before. If the question is "which way to city X?" then you get map routes. If it's "who lives or dies?", it will consult the inputs and make an impartial decision without considering anything to come. AI doesn't kill people, people do....

So the danger is people misusing technology, same as it's ever been. I'd like to see some regulation that makes companies responsible for their product outcomes, just as we do with faulty car parts. Having to reasonably keep people safe by not allowing companies to push products that have liabilities is something we've done before.
 
So the danger is people misusing technology, same as it's ever been. I'd like to see some regulation that makes companies responsible for their product outcomes, just as we do with faulty car parts. Having to reasonably keep people safe by not allowing companies to push products that have liabilities is something we've done before.
And surely, like pretty much every other technology ever invented, it will be misused. :(

Brew on :mug:
 
Stochastic parrot - Wikipedia
... and perhaps the best way to "model" usage of ChatGPT 3.5 is as an inexperienced high school intern. Both can be useful for miscellaneous errands, summarizing (perhaps incorrectly) information, etc. Knowing the limits and capabilities of the person (or tool) is important.
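The "stochastic parrot" idea can be shown with a toy bigram model: it generates text purely by echoing the statistics of its training text, with no understanding anywhere. A hypothetical sketch (the tiny corpus is made up):

```python
import random
from collections import defaultdict

corpus = "the beer is cold the beer is good the wort is hot".split()

# The entire "model" is a table of which word has followed which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(word, length=5, seed=0):
    """Generate text by sampling the next word from observed statistics."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(parrot("the"))  # fluent-looking, but only ever remixes the corpus
```

An LLM is vastly larger, but the same caution applies: it can only remix what it was fed, which is why knowing its limits matters.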

Model Autophagy Disorder (MAD)
... suggests that, in the intermediate term, ML generated content should not be used for ML training. Not sure what that says about humans using that content.



and a +1 to those who bought NVIDIA back in early January.
 
Interesting topic over at (link) speculating on the "fall" of Stack Overflow.

"Tragedy of the Commons" comes to mind. And yes, I've noticed that answers here have started to become less detailed (including mine).

Maybe there should be additional concerns about ChatGPT taking on "gate keeping" and "dumbing down" responsibilities. 🤷‍♀️

Also "How accurate is ChatGPT in regards to homebrewing knowledge?" (no direct link) over at /r/homebrewing.
 