As I reflect on news from the last week, it doesn't seem like an exaggeration to say we're at an inflection point in modern history with respect to the use, influence, and capabilities of AI.
As such, I'm just curious whether Biostars members would support a policy on the use of AI, and if so, where it would be implemented. Should it be presented alongside other best practices for posting, e.g. the use of a reprex, or should the policy focus on its use in answers to coding-related (or other) questions? Or neither/both?
As you indicate, AI in some form or other is here to stay, and we can't avoid it, either in our lives or on Biostars.
The main issue I see is the validation of information. If an answer is AI-assisted (code-related material fits this category well) but not tested, then unless the original poster comes back and validates the information, it would be far too easy to start accumulating "answers" that may appear OK on the surface but either do not work or, worse, lead to incorrect results.
Answers generated solely by plugging a question into an AI tool (BTW, this actually works reasonably well) run the risk of being incorrect if the person generating the answer does not have the domain knowledge to do a first-pass validation. One could argue that the voting system (and moderators) can manage this, but we will need to see how well that works.
Practically, a disclaimer/attribution is needed at the top of any post where (some) content has been generated with AI assistance (which is what a few posters have already been doing). Future visitors can then factor that in as they consume the content.
GenoMax, the funny thing is, I couldn't agree more with literally everything in your post. I sat there nodding my head the whole way through. But, ironically, I feel even further from knowing where the best place to start is.
This bit, here:
Practically, a disclaimer/attribution is needed at the top of any post where (some) content has been generated with AI assistance
definitely catches my attention, though. GenoMax, do you think that creating a label or some such would help? Or perhaps a button (like the ones we presently have for link, quotation, code, and picture) to highlight any portion of a response that is machine-generated? The reason I ask is that, while the OP could add such a tag, a respondent couldn't.

Thanks so much for your thoughtful reply.

VAL
For now, it will be up to the people posting AI-assisted/generated comments/answers to add a disclaimer/note line to their posts. A button could be added to the editor so that a "standard" disclaimer is inserted automatically, making this painless.
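To make the idea concrete, here is a minimal sketch of what such a button handler might look like, assuming the post editor is backed by a plain textarea; the element ids, the handler name, and the disclaimer wording are all hypothetical, not actual Biostars code:

```typescript
// Hypothetical sketch: a toolbar button that prepends a standard
// AI-assistance disclaimer to the post being edited.
// The disclaimer text and element ids below are placeholders.

const DISCLAIMER =
  "> *Note: parts of this post were generated with AI assistance.*\n\n";

function addAiDisclaimer(textareaId: string): void {
  const editor = document.getElementById(textareaId) as HTMLTextAreaElement | null;
  if (!editor) return;
  // Avoid inserting the disclaimer twice if the button is clicked again.
  if (editor.value.startsWith(DISCLAIMER)) return;
  editor.value = DISCLAIMER + editor.value;
}

// Wire the handler to a (hypothetical) toolbar button, sitting alongside
// the existing link/quote/code/picture buttons.
document.getElementById("ai-disclaimer-btn")
  ?.addEventListener("click", () => addAiDisclaimer("post-editor"));
```

Prepending (rather than appending) keeps the disclaimer at the top of the post, which is where it is most likely to be seen by future visitors.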
I'd say just point to the ChatGPT conversation - there seems to be a feature that lets one link to the conversation. Or, just show a screenshot. That way, AI-powered answers may become less self-reinforcing - unless the next version of GPT consumes images using OCR.