
@ Anthony
2025-02-25 13:55:51
nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqw9ctn8dd5c6s6wlkzh8g4uuadjpjd0v9qj5kl9a3ql49dvaz2ayq293dym I'm sorry to be a broken record, but your lab isn't doing a science experiment if "we send it some text and a picture and get text back, but what is it doing?" is the kind of question you're asking about it. Unless your research question literally is "how does this LLM work?", that is.
Worse, my lab mate keeps doing more prompt engineering, data pre-processing, and restricting the LLM's vocabulary to make it work. That's a lot of effort the LLM was meant to take care of, and it's becoming our problem instead.
If someone has a goal in mind and is trying to "make it [a system] work" to fit that preconceived goal, then this is not hypothesis-driven investigation; it's goal-seeking. That sounds like engineering, not science. I hope everyone involved understands this?
It's hard to tell when we've crossed that line.
Personally? The line was crossed when the decision was made to use an LLM. The hypothesis or research question should have driven that choice, not the other way around ("can an LLM do X?" is not a research question). There's a reason so much LLM-related research is of such poor quality; don't fall into the same hole!