I want to know what a dinosaur resignation letter would sound like. Not enough to ask an LLM, but I am curious.
Biomedical AI literally won the Nobel prize last year. But LLMs won’t help at all.
Tangentially related, any biomedical outfit that hasn’t bought a shitton of GPUs to run alphafold on is probably mismanaging money.
The Nobel prize went to AlphaFold, in case anybody is curious. Protein structure prediction is exactly the kind of thing ML (not LLMs in particular, much less a chatbot) is useful for, just as it’s useful in things like physical simulations: accuracy isn’t as good as the full physical model, but it runs so much faster that you can go through tons more data and actually get somewhere with your research. Better to have a million 99%-reliable answers than two 100%-reliable ones.
It should also be mentioned that the two methods aren’t mutually exclusive; there’s a ton of synergy between the old ways (X-ray crystallography and cryo-EM) and the new way (AlphaFold). Even when you measure the protein structure, the old ways only tell you the shape of the protein, not its skeletal structure (which is the actual important part), so to my knowledge there’s a bit of finicking around to figure out how the protein folds into that shape. AlphaFold predicts how the protein folds, so you can cross-reference that prediction with the measured shape of the protein to better estimate where the protein skeleton sits within it.
I once had an LLM write my son a birthday card as if I were an international fugitive escaping from the three-letter agencies, it was pretty fun
“Can’t we just jam this unreliable tech into your work that requires precision and consistency to somehow make it better?”
“Hey, we’re paying a shit load of money for access to this LLM because the tech bros said it will make us a ton of money but now we’re losing our ass. Can you try to utilize it more?”
“Uh… For what…?”
Ahoy there, matey!
I be wantin’ to hoist me tankard and give ye a hearty cheer for that tale ye spun! It be a right treasure of a story, filled with swashbucklin’ adventure and fine characters that danced like the waves upon the sea. I found meself lost in yer words, as if I were sailin’ the high seas alongside ye!
Ye have the gift of a true bard, and I be lookin’ forward to hearin’ more of yer yarns in the future. Keep the tales comin’, or I’ll be forced to make ye walk the plank! Arrr!
Fair winds and followin’ seas to ye!
Avast, ye team leader!
Ye have been scheduled a parley upon the third floor swashbuckler resources department, from which resolute compensating agreements will be negotiated and formed. As valuable as ye be to this crew, replacements are in order if loot be not what ye seek alongside us, bonded by honourable thievery across these high seas.
Beware that a noncompetitive curse signed in blood be upon ye. 9 months cast out with nary a drop of employment.
morale*
Only because you don’t know what they used the LLM for afterwards 😏
The extra panel is how I work. Is that not how everyone does it?
Where do you see an extra panel?
Click the red button
Weird place to put it, my mind automatically ignores the share icons
But those who know that’s where it is will look at the share icons more often, possibly leading to more people sharing.
I don’t need a boss to tell me to use productivity tools if they actually make me more productive.
I’m sure most people feel the same.
To be fair, there are some ways to use “AI” in biomedical research, although they used it before the recent “AI” boom. Things like specialized models for one use case, etc. The idea is to get the model to “think” like a protein, not like a human.
But then again, I’m not in the field and my only information is from an interview of a human geneticist about AI use.
Those are not LLMs though.
True, they are probably not even transformers, but they are also trained with gradient descent.
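The “trained with gradient descent” bit can be sketched in a few lines. This is a toy illustration with made-up data and parameter names, nothing resembling AlphaFold’s actual training: it fits a single weight in y = w·x by repeatedly stepping against the gradient of the squared error.

```python
# Toy (x, y) pairs where the true relationship is y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # parameter to learn, starting from zero
lr = 0.05  # learning rate (step size)

for _ in range(200):
    # Gradient of the mean squared error wrt w:
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x, averaged over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill against the gradient

print(round(w, 3))  # converges toward 2.0
```

Real models do the same thing with millions of parameters and automatic differentiation, but the loop is conceptually identical.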