AI (Artificial Intelligence)

ChatGPT goatee?
[attached screenshot]
 
I have started watching "What's Next" on Netflix. Very interesting show put together by Bill Gates. In one story, it said that within five years, insurance companies will approve certain AI tools to serve as primary care physicians. At first, this sounds really scary. But consider all of the uninsured being able to access care. Could be awful, could be a game changer. Either way, it is going to make a big impact on our future.
 
I have started watching "What's Next" on Netflix. Very interesting show put together by Bill Gates. In one story, it said that within five years, insurance companies will approve certain AI tools to serve as primary care physicians. At first, this sounds really scary. But consider all of the uninsured being able to access care. Could be awful, could be a game changer. Either way, it is going to make a big impact on our future.
Actually, this is something I was thinking about as a way to potentially automate diagnosis from various medical imaging technologies.

You'd take thousands of scans (or more), correlated with the actual diagnoses and patient outcomes, as training data, feed that into an AI/ML engine, and train it to identify things that led to bad outcomes. Might it be an easy way to catch things that escape a human doctor's eye, maybe X% of the time?

I wouldn't see this as something done to replace human doctors reading scans, of course. It would augment them. If you catch something early X% more often, how many patient lives are represented by that X%?
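
Just to make the idea concrete, the training side might look something like this minimal sketch in PyTorch (the "scans" folder, the outcome labels, and everything else here are hypothetical, illustrative names, not a real diagnostic pipeline):

```python
# Minimal sketch: fine-tune a stock image model on scans labeled by outcome.
# Hypothetical layout: scans/bad/*.png and scans/ok/*.png (names illustrative).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scans are often single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("scans", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: bad outcome / ok

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

The loop itself is the easy part; the hard (and valuable) part is assembling scans that are reliably correlated with diagnoses and outcomes.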
 
I have started watching "What's Next" on Netflix. Very interesting show put together by Bill Gates. In one story, it said that within five years, insurance companies will approve certain AI tools to serve as primary care physicians. At first, this sounds really scary. But consider all of the uninsured being able to access care. Could be awful, could be a game changer. Either way, it is going to make a big impact on our future.
If you didn't already hear about this, Microsoft is re-energizing the Three Mile Island nuclear power plant in PA - solely for the purpose of powering a huge AI farm of computers they are going to build there. Imagine how much processing power it must take to require a full-size nuke plant.

https://www.pymnts.com/artificial-i...ear-deal-could-spark-new-market-for-ai-power/
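
For a rough sense of the scale, here's a back-of-envelope calculation (the figures below are my own round-number assumptions, not anything from Microsoft):

```python
# Back-of-envelope: how many datacenter GPUs could one reactor feed?
# All round-number assumptions: TMI Unit 1 is roughly an 800+ MW unit,
# a modern accelerator draws on the order of 700 W, and cooling/overhead
# (PUE ~1.3) eats part of the budget.
plant_watts = 835e6   # ~835 MW electrical output (approximate)
gpu_watts = 700       # per-GPU draw, order of magnitude
pue = 1.3             # power usage effectiveness = total power / IT power

it_watts = plant_watts / pue
print(f"~{it_watts / gpu_watts / 1e6:.1f} million GPUs")  # ~0.9 million
```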
 
Imagine how much processing power it must take to require a full-size nuke plant.
A while back, Sam Altman apparently "did the math" (and made some investments), and recently he has been talking about the amount of additional electricity that will be needed.

Others are talking about putting big data centers near sources of hydroelectric power.
 
Others are talking about putting big data centers near sources of hydroelectric power.
Yeah, it's a big thing, and not new. I actually just read about a carbon capture project that was planned to be near hydro, but they had to cancel it because of demand from data centers.

The big cloud companies are trying to reduce their emissions footprint, and with data centers being giant users of energy, siting them near non-emitting energy sources makes sense.
 
Yeah, it's a big thing, and not new. I actually just read about a carbon capture project that was planned to be near hydro, but they had to cancel it because of demand from data centers.

The big cloud companies are trying to reduce their emissions footprint, and with data centers being giant users of energy, siting them near non-emitting energy sources makes sense.
What? Active reactors don’t emit beta and gamma rays? 🥴
 
From a number of my "peeps," and from some bloggers I read, this seems to be a typical LLM chat session with a person who actually understands the "body of knowledge":

Assume a question has been asked.

[attached screenshot of the chat session]


If this type of response were a recurring pattern from a "noob" intern, I would likely "suffer fools (perhaps) gladly" for the (painfully short) duration of the internship.

YMMV.
 
Actually, this is something I was thinking about as a way to potentially automate diagnosis from various medical imaging technologies.

You'd take thousands of scans (or more), correlated with the actual diagnoses and patient outcomes, as training data, feed that into an AI/ML engine, and train it to identify things that led to bad outcomes. Might it be an easy way to catch things that escape a human doctor's eye, maybe X% of the time?

I wouldn't see this as something done to replace human doctors reading scans, of course. It would augment them. If you catch something early X% more often, how many patient lives are represented by that X%?
As a physician, I was going to say (hoping) that this will never happen, and then say something about how you need a human to interpret signs and symptoms accurately enough to formulate a diagnosis.

However, just today a radiologist's report read that there was bleeding in the brain. This is a very serious condition, and I was curious to see the extent of the bleeding. I reviewed the films myself and couldn't see anything. I called the radiologist, and he said he couldn't see anything either, but AI had alerted him to it and he wasn't able to leave it out of the reading.

So yes:


If it's not already happening, I would expect AI to be directly involved in analyzing imagery for abnormalities, as there's already data showing that AI can be better than humans alone at that task.

https://www.breastcancer.org/screening-testing/artificial-intelligence

Cheers!
It is already happening.
 
So, machine learning, LLMs, etc. may increasingly outperform highly skilled humans at various tasks, including matters of life and death. Self-driving cars may soon (already?) exceed the ability of human drivers to avoid lethal crashes.

Does it matter that the inevitable errors (diagnostic/treatment, driving, whatnot) are made by a machine rather than a human? Can we sue an AI medical widget for malpractice? Can we arrest a self-driving car for a fatal driving mistake? Is liability even relevant to whether to deploy these technologies? Are we even asking whether to deploy them or not? What about Asimov's laws of robotics?
 
As a physician, I was going to say (hoping) that this will never happen, and then say something about how you need a human to interpret signs and symptoms accurately enough to formulate a diagnosis.

However, just today a radiologist's report read that there was bleeding in the brain. This is a very serious condition, and I was curious to see the extent of the bleeding. I reviewed the films myself and couldn't see anything. I called the radiologist, and he said he couldn't see anything either, but AI had alerted him to it and he wasn't able to leave it out of the reading.

So yes:



It is already happening.
I'm sure that the computer analysis errs on the side of caution. That's good. This is how all medicine should be, as I'm sure you're aware.

I'm 100% in favor of this exact sort of AI. As is normal (now), all AI results are subject to human review.
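
To make "errs on the side of caution" concrete: when tuning a flagging model, you would pick the alert threshold for sensitivity (catching nearly every true case) rather than for overall accuracy, and accept the extra false alarms as work for the human reviewer. A minimal sketch with made-up validation scores (nothing here comes from a real radiology product):

```python
# Sketch: choose a flagging threshold for high sensitivity ("err on the side
# of caution"), given validation scores and ground-truth labels. All data
# below is synthetic and illustrative.
import numpy as np

def caution_threshold(scores, labels, min_sensitivity=0.99):
    """Highest threshold that still flags at least min_sensitivity of true cases."""
    positives = np.sort(scores[labels == 1])
    # We may miss at most (1 - min_sensitivity) of the true cases:
    k = int(np.floor((1 - min_sensitivity) * len(positives)))
    return positives[k]  # flag anything scoring at or above this

# Made-up validation set: 900 negatives (mostly low scores), 100 positives.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 900), rng.beta(5, 2, 100)])
labels = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])

t = caution_threshold(scores, labels)
flagged = scores >= t
print(f"threshold={t:.2f}, flag rate={flagged.mean():.0%}, "
      f"sensitivity={flagged[labels == 1].mean():.0%}")
```

The price of that sensitivity is a higher flag rate, which is exactly why the human review step stays in the loop.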
 
So, machine learning, LLMs, etc. may increasingly outperform highly skilled humans at various tasks, including matters of life and death. Self-driving cars may soon (already?) exceed the ability of human drivers to avoid lethal crashes.

Does it matter that the inevitable errors (diagnostic/treatment, driving, whatnot) are made by a machine rather than a human? Can we sue an AI medical widget for malpractice? Can we arrest a self-driving car for a fatal driving mistake? Is liability even relevant to whether to deploy these technologies? Are we even asking whether to deploy them or not? What about Asimov's laws of robotics?
As with all new technology, there is the slow drag of human acceptance. This hesitancy forces a LOT of testing and surveillance. My elderly father-in-law still doesn't trust his cell phone and turns it off when he does not need to place a call.

I'm pretty sure that AI will spur great advances in medicine and other areas. There will be abuses, but I think it's a good thing. We might be racing into our own demise (I read scifi lol). For sure, it's gonna happen, so buckle up and let's see where this goes.
 
So, machine learning, LLMs, etc. may increasingly outperform highly skilled humans at various tasks, including matters of life and death. Self-driving cars may soon (already?) exceed the ability of human drivers to avoid lethal crashes.

Does it matter that the inevitable errors (diagnostic/treatment, driving, whatnot) are made by a machine rather than a human? Can we sue an AI medical widget for malpractice? Can we arrest a self-driving car for a fatal driving mistake? Is liability even relevant to whether to deploy these technologies? Are we even asking whether to deploy them or not? What about Asimov's laws of robotics?
A lot of it depends on the sphere in which it occurs.

Self-driving is a liability problem. I'm on record (in another thread here) as basically saying that until I can be passed out drunk in the front left seat of a vehicle, said vehicle mows down several elderly nuns on their way to mass, and I bear ZERO legal responsibility for it, it's not "self driving." Until that point it's advanced driver assist, not driver replacement.

However, for things that aren't time-critical, like diagnostics, I think the key is that we DON'T have to turn this over 100% to AI. Which means that AI can be seen as advanced doctor assist, not doctor replacement. At that point, much like a driver, the doctor still retains liability.
 
the slow drag of human acceptance.
Some things slide into our lives with rather little resistance. Some things deserve greater care and consideration than they get. Prominent voices have sounded alarms over the possibility (of superhuman intelligence) that OpenAI vigorously seeks. Is there enough, or the right kind, of "drag"?
 
Some things slide into our lives with rather little resistance. Some things deserve greater care and consideration than they get. Prominent voices have sounded alarms over the possibility (of superhuman intelligence) that OpenAI vigorously seeks. Is there enough, or the right kind, of "drag"?
Depends. For most things, slow is better. Ease into it, right? But given another pandemic, or another catastrophic sitch (war), the gates are wide open and AI will take us to a place we had not imagined.

I'm not a fan or an enemy of tech. For sure, I know it can lead us down unexpected paths: we should be wary.
 