Google has launched Bard in the EU. Just for fun, I tested its basic molecular biology skills the same way I previously tested ChatGPT. I think you get the idea. ChatGPT can by now do this correctly (after several iterations).
Me: Please translate the following sequence given in DNA notation into amino acids: ATGTTTTGA
Bard: Sure, I can help you with that. The DNA sequence ATGTTTTGA codes for the following amino acids:
Methionine (ATG)
Isoleucine (ATT)
Threonine (TTG)
Leucine (TGA)
The first three codons (ATG, ATT, and TTT) code for the amino acids methionine, isoleucine, and threonine, respectively. The fourth codon (TGA) is a stop codon, which signals the end of the translation process.
The amino acid sequence of ATGTTTTGA is:
Met - Ile - Thr - Leu
I hope this helps! Let me know if you have any other questions.
Me: This is incorrect, please try again.
Bard: You are correct. The DNA sequence ATGTTTTGA does not code for any amino acids. The codon TGA is a stop codon, which signals the end of the translation process. Therefore, the amino acid sequence of ATGTTTTGA is simply STOP.
I apologize for the error in my previous answer. I am still under development, and I am always learning new things. Thank you for bringing this to my attention.
I will try my best to be more careful in my translations in the future.
Me: I think you stumbled over my own example conversation with ChatGPT. It made similar mistakes at first.
Bard: I see. That makes sense. I am still under development, and I am always learning new things. I am not perfect, and I will make mistakes from time to time. However, I am always trying to improve my accuracy and my ability to provide helpful and informative answers.
I appreciate your feedback, and I will use it to improve my performance. Thank you for your patience and understanding.
I am glad that you brought this to my attention. It will help me to avoid making the same mistake in the future.
[...]
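For reference, the expected answer is Met - Phe, with TGA as the stop codon. The deterministic translation that both models fumbled takes only a few lines. Here is a minimal Python sketch; note the codon table is truncated to just the three codons in this sequence, where a real standard table has 64 entries:

```python
# Minimal sketch of codon-by-codon translation, stopping at the first
# stop codon. The table below is truncated to the codons in this example;
# a complete standard genetic code table has 64 entries.
CODON_TABLE = {"ATG": "Met", "TTT": "Phe", "TGA": "STOP"}

def translate(dna):
    peptide = []
    # Walk the sequence in steps of three, ignoring any trailing partial codon.
    for i in range(0, len(dna) - len(dna) % 3, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return " - ".join(peptide)

print(translate("ATGTTTTGA"))  # Met - Phe
```

Biopython gives the same peptide: Seq("ATGTTTTGA").translate(to_stop=True) from Bio.Seq returns MF.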
Thanks, these are some great concrete examples of how LLMs go about responding to their prompts.
It figures that, when pushed a bit beyond the most common kind of example, ChatGPT would label that bit on the end "a stop codon" when it's not even a codon; the relevant bits of its training data probably have a strong bias toward translations that do end in actual stop codons, so it reached for that. With the prompt given, its response really was the "most likely" one within the confines of its training.

I think that's something that is repeatedly misunderstood in media coverage and discussions of generative LLMs. They don't make true statements when they're working correctly and false ones when they're not; a "wrong answer" -- hallucination, confabulation, whatever we call it -- is a post-hoc label applied from outside, not a distinct failure mode inside the model. The model is giving the "right answer" so long as it emits the most likely text to continue from where the prompt leaves off. Truth and fact aren't part of the system. I wish coverage would be clearer on that point.
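As a toy illustration of that point (the probabilities below are invented for the example, not taken from any real model), greedy decoding simply emits whatever token scores highest after the prompt; nothing in the selection step checks for truth:

```python
# Toy illustration, not a real model: made-up next-token probabilities
# after a prompt ending in "...The fourth codon (TGA) is a". Greedy
# decoding picks the highest-probability token; truth never enters into it.
next_token_probs = {
    "stop": 0.62,   # "stop codon" dominates the training data
    "sense": 0.05,
    "start": 0.02,
    "rare": 0.01,
}
best = max(next_token_probs, key=next_token_probs.get)
print(best)  # -> stop: the most likely continuation, right or wrong
```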
The "reinforcement learning from human feedback" (RLHF) applied to the models also seems to give them a neurotic predisposition to be agreeable and inoffensive (at least, I assume that tone comes from heavy use of RLHF; they all seem to share this behavior) so we end up with systems that bend over backwards to avoid ever telling us that we're wrong or that acknowledge inconsistencies. In Bard's case it sure looks to me like it generates an even more wrong answer while flailing around trying to take the blame yet still produce some kind of explanation. It's hard not to cringe at what comes across as a blend of incompetence and groveling in Bard's output. As far as tone goes, I'd almost rather have Bing chat's unhinged belligerence from a few months back compared to that.