Using AI to get questions answered - Has anyone dug into it?


sixstring

Ok, so I'm a bit of a tech nut. I've loved computers since I was little (if you can call a Sinclair ZX Spectrum 16k a computer :D). But the point is, I'm starting to find a lot of uses for AI in my home and work life.

Has anyone else here been using AI to get help or provide information on brewing?
If you do, which AI engine do you use? The reason I ask is that while ChatGPT is fast, it's also a lot less accurate; it scrapes websites and doesn't seem to sift out the answers well.
So I started using Grok (yes, I'm aware of its relationship to certain well known figures, but I'm not interested in that, just what the AI can help me with). It's a fair bit slower than other AIs; however, because it has full access to X, it seems able to pull a _lot_ more detailed information about things.
The thing with AI that I have noticed and learnt is that in order to get the best response from it, you must tailor your opening statements properly.
If you ask a simple one-line question, then chances are you're going to get a rubbish response back, one that's unhelpful and a waste of your time. Then you may find you don't bother again, since the first attempt left a bad taste in your mouth.

Here's a good example of a tailored opener:
Hello Grok,
I'm a home brewer in Phoenix, Arizona. The ambient temperature of the rooms in my house never goes below about 72F, which isn't conducive to brewing lager, as that requires much lower temps.
However, I would really like to try my hand at brewing a refreshing lager for summer time drinking. I don't want to buy another fridge or freezer to use as a fermentation chamber due to lack of space and also the cost of those.
I would prefer not to spend a lot of money on this, and it needs to be something that doesn't require me to attend to it every few hours. The lower the maintenance the better.
With all that in mind, what options would you suggest for me?
If you want to see the response I got, which was quite extensive and gave me some great options, including one I hadn't thought of too, then here's the link to see it:
https://grok.com/share/c2hhcmQtMg==_34b2e09d-aa94-4353-a79d-eec5a04e64a2

If anyone's had any experience using other AI engines, I'd love to know what you thought of them.
 
A good forum discussion would have yielded a couple of additional options and perhaps a proven recipe/process (or two).

"Ambitious intern" may currently be the best description for LLMs.






LLMs appear to be good at summarizing a body of knowledge. Like an ambitious intern (often right, often wrong, rarely in doubt), they can be useful - as long as you understand their role and place.



Just remember it's still statistical word generation. Somewhere in Keynote: AI without the BS, for humans (NDC 2025) there is a demonstration of this - including showing the various paths that could have been taken (based on the prompt) and the chance (percentages) of taking each path.
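To see what "statistical word generation" looks like in miniature, here's a toy Python sketch. The probability table is invented purely for illustration - a real model scores an enormous vocabulary with a neural network - but the sampling step at the bottom is essentially why the same prompt can branch down different paths:

```python
import random

# Made-up next-token probabilities for one context; a real model computes these
# with a neural network over tens of thousands of tokens.
next_token_probs = {
    "lager needs": [("cold", 0.55), ("patience", 0.25), ("a", 0.15), ("kveik", 0.05)],
}

def sample_next(context: str) -> str:
    candidates = next_token_probs[context]
    tokens = [tok for tok, _ in candidates]
    weights = [p for _, p in candidates]
    # Draw one token with probability proportional to its weight - this is why
    # the same prompt can take a different "path" on different runs.
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("lager needs"))
```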



If I were responsible for a web site intended for humans, web-enabled LLMs would concern me - and I would consider approaches for blocking them.

Without going into details, most web sites are scaled assuming people read pages (so let's assume a page a minute). A prompt that 'reads' the internet in real time can 'read' 10 pages a minute. Is it fair to expect that every site hosting content will scale up 10x to support LLMs summarizing their content? Feel free to quibble on the details - maybe it's only 2x, maybe it's 25x - but in any scenario, please explain the value to the content provider of LLMs. Or, better yet, find a model that rewards the content provider for providing content to LLMs.
 
With regard to "prompt writing", consider the following approach
  • have a "back and forth" typing session with an LLM
  • request a prompt that consolidates the session
  • take the prompt to another LLM
  • "rinse, lather, repeat"
 
Ya gots to be cautious. There's a thread regarding chestnuts and unexpected post-fermentation activity; on its first day, I consulted one of the freebie LLMs and it claimed chestnuts contain glucoamylase. That thread was still percolating two weeks later when I asked the same agent a related question - and then it claimed chestnuts contain no glucoamylase - and cited the HBT thread! :oops:

Cheers!
 
One of the main problems I see with LLMs is that they suck up all of the information available to them. If they are using source material from social media sites, they are going to pull in a lot of misinformation, and outright false information. Without a way to curate the inputs, your outputs are likely to be contaminated with a lot of garbage. If I really want solid information, I prefer sources that can be traced back to true authorities on the subject matter.

Brew on :mug:
 
Grok,
I use it for recipe development. It is quick and has been good so far. For example:

make me a pale ale recipe with 4 oz of citra, 2 oz of mosaic and an ounce of galaxy hops


Below is a 5-gallon all-grain American Pale Ale recipe tailored to use 4 oz of Citra, 2 oz of Mosaic, and 1 oz of Galaxy hops. This recipe emphasizes the bright, citrusy, and tropical flavors of these hops while maintaining a balanced pale ale profile. Assumes a 70% brewhouse efficiency; adjust grain amounts if your system differs.


Citra-Mosaic-Galaxy American Pale Ale (5 Gallons, All-Grain)
Stats
Original Gravity (OG): ~1.052
Final Gravity (FG): ~1.012
ABV: ~5.2%
IBUs: ~40
SRM: ~6 (light amber)
Ingredients
Grain Bill
9.5 lbs Pale 2-Row Malt (85%)
1 lb Munich Malt (9%)
0.5 lb Crystal 20L (4.5%)
0.25 lb Flaked Wheat (1.5%) – for head retention and slight haze
Hops
0.5 oz Citra (12% AA) @ 60 min – bittering
1 oz Citra @ 10 min – flavor
0.5 oz Mosaic (12% AA) @ 5 min – flavor/aroma
0.5 oz Galaxy (14% AA) @ flameout (steep 15 min) – aroma
1 oz Citra + 0.5 oz Mosaic + 0.5 oz Galaxy – dry hop (5-7 days)

Total Hops Used: 4 oz Citra, 2 oz Mosaic, 1 oz Galaxy


Yeast
Safale US-05 (1 packet, rehydrated) or equivalent clean ale yeast (e.g., Wyeast 1056)
Other
1 tsp Gypsum (optional, for hoppy water profile)
Whirlfloc tablet @ 10 min (for clarity)
Brewing Process
Mash
Water: 3.5 gallons mash water, 4.5 gallons sparge water (adjust for your system)
Mash Temp: 152°F for 60 minutes
Mash Out: 168°F for 10 minutes (optional)
Boil (60 minutes)
Add 0.5 oz Citra at 60 min (start of boil).
Add 1 oz Citra and Whirlfloc at 10 min.
Add 0.5 oz Mosaic at 5 min.
Add 0.5 oz Galaxy at flameout; steep for 15 min while cooling.
Fermentation
Cool wort to 66°F, transfer to fermenter, and oxygenate.
Pitch US-05 yeast.
Ferment at 66-68°F for 10-14 days.
Add dry hops (1 oz Citra, 0.5 oz Mosaic, 0.5 oz Galaxy) after primary fermentation (around day 5-7). Dry hop for 5 days.
Cold crash at 35°F for 1-2 days (optional, for clarity).
Packaging
Bottle or keg with ~2.5 volumes CO2.
Condition for 1-2 weeks for optimal carbonation.

This looks pretty close to a hoppy APA.
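Out of curiosity, the claimed stats can be roughly sanity-checked with the usual homebrew math. Below is a minimal Python sketch assuming typical published extract potentials and the Tinseth IBU formula; the exact numbers will vary with your own assumptions, so treat the output as an estimate rather than a verdict on Grok's figures:

```python
import math

BATCH_GAL = 5.0
EFFICIENCY = 0.70  # the 70% brewhouse efficiency the recipe assumes

# (pounds, typical points-per-pound-per-gallon extract potential)
grain_bill = [
    (9.5, 37),   # Pale 2-row
    (1.0, 37),   # Munich
    (0.5, 35),   # Crystal 20L
    (0.25, 36),  # Flaked wheat
]

og_points = sum(lbs * ppg for lbs, ppg in grain_bill) * EFFICIENCY / BATCH_GAL
og = 1 + og_points / 1000
fg = 1.012  # assumed, per the recipe
abv = (og - fg) * 131.25

def tinseth_ibu(oz, alpha_acid, boil_min, gravity=og, volume=BATCH_GAL):
    """Tinseth estimate: utilization falls with wort gravity, rises with boil time."""
    mg_per_l = alpha_acid * oz * 7490 / volume
    bigness = 1.65 * 0.000125 ** (gravity - 1)
    boil_factor = (1 - math.exp(-0.04 * boil_min)) / 4.15
    return mg_per_l * bigness * boil_factor

ibu = (
    tinseth_ibu(0.5, 0.12, 60)    # Citra @ 60 min (bittering)
    + tinseth_ibu(1.0, 0.12, 10)  # Citra @ 10 min
    + tinseth_ibu(0.5, 0.12, 5)   # Mosaic @ 5 min
    # flameout steep and dry hops add little to no measurable IBU here
)

print(f"OG ~{og:.3f}   ABV ~{abv:.1f}%   IBU ~{ibu:.0f}")
```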
 
without a way to curate the inputs
Kadrey v. Meta Platforms, Inc. (3:23-cv-03417) District Court, N.D. California.

tl;dr? the claim is that copyrighted information was used to train LLMs.



As I asked above ...
please explain the value to the content provider for LLMs.
... and, yes (right or wrong), we are content providers here at HomeBrewTalk.
 
With regard to "prompt writing", consider the following approach
  • have a "back and forth" typing session with an LLM
  • request a prompt that consolidates the session
  • take the prompt to another LLM
  • "rinse, lather, repeat"
This is definitely the main thing to take away when using AI.
Being able to dig into things deeper and then come back to other options it gave earlier is a great thing.

A good forum discussion would have yielded a couple of additional options and perhaps a proven recipe/process (or two).
This is true, and I can't disagree. However, forums are also a great place to "lose" information. It's very time consuming to sift through a million posts looking for that one comment you know you read 4 years ago and want to find. Forums are also very conducive to repeat posting. I'm pretty sure that if my forum search skills were better honed, I'd have been able to find the answers to all the questions I've posted here. However, I often find it's easier to just post a new thread with my question, and it'll get new information from people too.

There are definitely use cases for both AI and forums. Both are great tools for any application (I use a forum for learning to repair my cars when I don't know how to do something on them; not sure AI could give me the best answers when the people who help me have the exact same make and model of car and have already done the work and taken photos to help me).

I was mainly just curious about it to see who else has gotten any use from AI for home brewing. :)
 
I used it to help tune up a recipe or two so far. Only used the Facebook Meta AI, but it seemed to be pretty fast and had great results.
 
With regard to "prompt writing", consider the following approach
  • have a "back and forth" typing session with an LLM
  • request a prompt that consolidates the session
  • take the prompt to another LLM
  • "rinse, lather, repeat"

This is definitely the main thing to take away when using AI.
Being able to dig into things deeper and then come back to other options it gave earlier is a great thing.

FWIW, I've done that at work a number of times. The result wasn't deeper. The result was a better summary.
 
A good forum discussion would have yielded a couple of additional options and perhaps a proven recipe/process (or two).

This is true, and I can't disagree. However, forums are also a great place to "lose" information. It's very time consuming to sift through a million posts looking for that one comment you know you read 4 years ago and want to find. Forums are also very conducive to repeat posting. I'm pretty sure that if my forum search skills were better honed, I'd have been able to find the answers to all the questions I've posted here. However, I often find it's easier to just post a new thread with my question, and it'll get new information from people too.

Are you willing to pay a small fee for curated, peer-reviewed information that was accurate (at the moment it was peer reviewed)?

In past, that information was often found in books and magazines.

eta: if not, welcome to the (filtered) echo chamber that LLMs will create.
 
If I were responsible for a web site intended for humans, web-enabled LLMs would concern me - and I would consider approaches for blocking them.

... but in any scenario, please explain the value to the content provider for LLMs. Or, better yet, find a model that rewards the content provider for providing content to LLMs.

Well, well, well, what do we have here?
  • "... decoupling content creation from value ..."

/1/ (link) /2/ (link)
 
You said:
Please produce a ~500 word essay explaining why governments should: severely restrict or ban generative AI tools, and prevent AI companies from consuming immense amounts of electric power.


ChatGPT said:
Why Governments Should Severely Restrict Generative AI and Limit AI Power Consumption

The rapid development and deployment of generative AI tools have raised serious ethical, environmental, and social concerns that governments cannot afford to ignore. While these technologies promise benefits in productivity and innovation, the potential risks far outweigh the short-term gains. To safeguard society, democracy, and the environment, governments should severely restrict or even ban the development and deployment of generative AI tools, while also placing firm limits on the massive electricity consumption associated with their use.

One of the most pressing concerns with generative AI is its potential to spread misinformation and undermine public trust. Tools that generate text, images, and video indistinguishable from human-created content can be weaponized for political manipulation, deepfake propaganda, and fraud. The 2024 global elections saw early examples of AI-generated content used to impersonate candidates and spread false narratives. Unlike traditional media, AI-generated misinformation can be produced at scale and with low cost, making it virtually impossible for governments, fact-checkers, or civil society to contain. This erodes democratic discourse and poses a threat to national and global political stability.

Moreover, generative AI exacerbates economic inequality and labor disruption. By automating creative and white-collar tasks—such as copywriting, coding, and design—these tools displace skilled workers and funnel wealth and power to a few large tech corporations. This concentration of economic benefit in the hands of a few undermines fair competition and widens the digital divide. Without strict regulation or outright bans, generative AI is likely to intensify societal inequality and diminish human agency in many professional sectors.

Equally alarming is the environmental impact of training and operating large-scale AI systems. The energy demands of AI data centers are staggering. Training a single large model can consume as much electricity as hundreds of homes use in a year. As AI companies scale their models, they are increasing pressure on national grids, contributing to carbon emissions, and exacerbating climate change. In a world already facing a climate crisis, allowing AI firms to use massive amounts of electricity for tools that often serve entertainment or speculative economic value is irresponsible. Governments must set hard limits on energy use by AI systems, just as they regulate emissions in other industries.

Furthermore, the unchecked proliferation of generative AI undermines human creativity and cultural integrity. As machines mimic artistic expression, they dilute the value of original human-made content. This commodification of creativity erodes cultural heritage and challenges the role of the artist in society. If governments do not intervene, we risk a future where art, literature, and journalism are driven not by human insight or emotion, but by predictive algorithms trained on past data.

In conclusion, generative AI poses a complex threat to truth, jobs, the environment, and cultural integrity. While innovation should not be stifled indiscriminately, the unique dangers presented by this technology demand bold, preventive action. Governments have both the authority and responsibility to restrict or ban generative AI tools and impose strict limits on their energy consumption to protect public good over private profit.
 
......

If you want to see the response I got, which was quite extensive and gave me some great options, including one I hadn't thought of too, then here's the link to see it:
https://grok.com/share/c2hhcmQtMg==_34b2e09d-aa94-4353-a79d-eec5a04e64a2
Thought I'd read the response, but the link just opens a blank page on grok.

I had wondered if they could be used to collate prizewinning recipes for a particular beer style.
Has anyone tried getting that from AI/LLMs?

Someone on YouTube did it (manually?) a while back, with 20 or so first-place award recipes for strong imperial stouts. He compared every ingredient and method, so you could see, for instance, what the average percentage of black malt was.
I developed my own 16% recipe around the average winning numbers, and named it 'Award Stout'.
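That kind of comparison is straightforward to script once the grain bills are transcribed. Here's a small Python sketch of the idea - the three grists below are invented placeholders, not actual award recipes:

```python
from collections import defaultdict

# Hypothetical grain bills (pounds per recipe), standing in for real winners.
winning_grists = [
    {"pale malt": 16.0, "black malt": 0.75, "roasted barley": 1.0, "crystal": 1.5},
    {"pale malt": 18.0, "black malt": 1.0, "roasted barley": 0.75, "crystal": 2.0},
    {"pale malt": 17.0, "black malt": 0.5, "roasted barley": 1.25, "crystal": 1.0},
]

shares = defaultdict(list)
for grist in winning_grists:
    total_lbs = sum(grist.values())
    for malt, lbs in grist.items():
        shares[malt].append(100 * lbs / total_lbs)  # percent of this grist

for malt, percents in sorted(shares.items()):
    print(f"{malt}: {sum(percents) / len(percents):.1f}% of grist on average")
```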

EDIT:
Tried asking Grok, in a few ways, about the ingredients used in recent homebrew competition prizewinning imperial stouts.
Every answer told me 13th Day Imperial Stout was an American Homebrewers Association medal-winner, but couldn't give the year it won (though Google does show it). One answer also gave the ingredients for a modern recipe posted on X (but not a competition winner), saying the quantities weren't specified and listing a "likely typical figure" instead.

It then went on to offer a bar chart comparing ingredient percentages between those recipes - which isn't possible when the amounts aren't known!
 
Grok and I had an interesting discussion today about holding off the roasted malts till the end of the mash to reduce their astringent contribution,

but adding debittered black malt early in the mash to help lower and stabilize the pH (while hopefully not causing astringency), instead of adding lactic acid or sauermalz early in the mash.

I don't know if this is accurate, but it's an interesting idea nonetheless.
 
EDIT:
Tried asking Grok, in a few ways, about the ingredients used in recent homebrew competition prizewinning imperial stouts.
Every answer told me 13th Day Imperial Stout was an American Homebrewers Association medal-winner, but couldn't give the year it won (though Google does show it). One answer also gave the ingredients for a modern recipe posted on X (but not a competition winner), saying the quantities weren't specified and listing a "likely typical figure" instead.

It then went on to offer a bar chart comparing ingredient percentages between those recipes - which isn't possible when the amounts aren't known!
Hallucination is the polite term for what the LLM returned in response to your keystrokes.

LLMs (at the moment) are statistical word generators - given the previous words/phrases/context, they display what is the most likely next word/phrase.

Need evidence that LLMs are statistical word generators? [NDC London 2025] Keynote: AI without the BS, for humans ...

Like any other tool (e.g. hydrometers, refractometers), there is a set of problems where LLMs can be useful.
 
Grok and I had an interesting discussion today about holding off the roasted malts till the end of the mash to reduce their astringent contribution,

but adding debittered black malt early in the mash to help lower and stabilize the pH (while hopefully not causing astringency), instead of adding lactic acid or sauermalz early in the mash.

I don't know if this is accurate, but it's an interesting idea nonetheless.
Be careful, as you may be entering an LLM-induced digital "echo chamber", where LLMs statistically repeat phrases from what humans most frequently wrote in the past.
 
I think a good example of how contradictory these "AI" LLMs can be came up earlier this year, when I was looking for a new growler. The one I have holds 2 pints (just barely short of a liter; it has a specific brewery design and logo on it and works amazingly well), and that's great for how I use growlers 90% of the time, but I wanted one that could hold between 1.5 and 2 liters.

So I asked "Can the BRAND MODEL be used to hold beer?" (with BRAND and MODEL being the actual brand and model), and it gave this super enthusiastic, confident response of "YES, the BRAND MODEL is an excellent choice for holding beer as it can..." followed by a long list of reasons why. I just stopped and thought "Really?" So I asked the exact same question but changed "beer" to "carbonated beverages", and it said "No, unfortunately, the BRAND MODEL is not designed to hold carbonated beverages and could potentially cause harm to the bottle or you..." and explained why it's not good. And I'm thinking "Then why the hell did you just say it's an EXCELLENT choice?!"

I've always been suspicious of LLMs for how haphazardly they can gather and throw data together, hence why I rephrased the question immediately afterwards, but obviously if it can't hold carbonated beverages, it can't hold beer.
 
Also obvious: LLMs don't know anything. But they write well.
Right. Part of why I put "AI" in quotes. It's not really artificial intelligence. It's a statistical linguistic system that predicts what's most likely to come next, but it also results in a ton of nonsense being spewed in an extremely confident-sounding and plausible-seeming manner. It's specifically that feeling of absolute confidence being written at a high quality that gets a lot of people accepting things that are wrong without question.
 
grok and i had an interesting discussion today about holding off the roasted malts til the end of the mash to reduce there astrignent contribution.

but adding debittered black malt early in the mash to help lower and stabilize the pH while hopefully not causing astringency instead of adding lactic or saurmalt. early in the mash

i dont know if this is accurate but its an interesting idea none the less
An example of the lack of nuance LLMs provide:

Does pH between 5.0 - 5.8 have anything to do with astringency? - no.
Does your water composition have anything to do with mash pH? - yes.
Was water composition even mentioned? - no
Do a lot of brewers need to work to get pH higher with dark beers? - yes.

The real answer is to use debittered malt if one is concerned about astringency, and mash as usual, balancing pH as needed.

The debittered malt is the only part of the whole response that will have an impact on astringency (taste/flavor). The fact that it used the word "stabilize" makes me think of that stupid 5.2 product, which the AI probably thinks is the best thing since sliced bread!

Will this AI stuff get better? Probably to some extent but I am yet to be impressed.
 
An example of the lack of nuance LLMs provide:

Does pH between 5.0 - 5.8 have anything to do with astringency? - no.
Does your water composition have anything to do with mash pH? - yes.
Was water composition even mentioned? - no
Do a lot of brewers need to work to get pH higher with dark beers? - yes.

The real answer is to use debittered malt if one is concerned about astringency, and mash as usual, balancing pH as needed.

The debittered malt is the only part of the whole response that will have an impact on astringency (taste/flavor). The fact that it used the word "stabilize" makes me think of that stupid 5.2 product, which the AI probably thinks is the best thing since sliced bread!

Will this AI stuff get better? Probably to some extent but I am yet to be impressed.
It never mentioned an association between astringency and pH; that wasn't clear in my post. I gave it the grain bill and my water profile and it suggested some salt additions. I then asked about late roasted-grain additions and it recommended adding, I think it said, 20% of the roasted grains early to lower pH. It also recommended adding lactic acid or acid malt to the mash early to lower the pH. I don't have a pH meter, so I was nervous about adding lactic, and I don't have any lactic anyway. It never said anything about a pH stabilizer, and it never has in any of the recipes I have developed using Grok.

The biggest problem I find with Grok, and my daughter has with ChatGPT, is that they are generally very positive and tend to give you what you want to hear. It makes sense, because you wouldn't want to interact much with someone who always tells you that you are wrong, but it seems to have a confirmation bias.

I guess asking it less pointed questions would help.

For example, I asked if I could get away with adding some of the roasted malts early instead of adding lactic, and it said yes. I probably should have asked if there is a substitute for adding lactic acid or sauermalz.

I assumed that the debittered malt would lower the pH, similar to sauermalz. I figured since it's less astringent, adding all of it at the start wouldn't hurt, and Grok agreed - but maybe, like I said, it's confirmation bias in the way I asked.


Anyway, for 5.5 gallons I am adding:
10 lbs of 2-row
1/2 lb of chocolate malt (20% at start, 80% in the last 15 mins)
1/2 lb of roasted barley (20% early, 80% late)
1/4 lb debittered malt at start of mash
12 oz Quaker rolled oats

a pinch of gypsum (1/4 tsp lol)

Hopefully the mash pH is close enough.

I am sure I will make good beer, especially dark beer - my water always makes good dark beer.
 