The LLM can interpolate within the knowledge space it’s been trained on, filling in gaps by blending concepts in novel ways. However, it will stay within the convex hull of its training data, constrained by the boundaries of what it has learned.
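To make the convex-hull metaphor a little more concrete, here is a minimal numerical sketch (an analogy only, not a description of how transformers actually compute): a blend of known points with non-negative weights that sum to one is always a new point, yet it can never land outside the hull spanned by those points.

```python
import numpy as np

# Three "training" concepts represented as points in a toy 2-D space.
training_points = np.array([
    [0.0, 0.0],   # concept A
    [1.0, 0.0],   # concept B
    [0.0, 1.0],   # concept C
])

# A convex combination: non-negative weights summing to 1.
weights = np.array([0.2, 0.5, 0.3])

# The blend is a genuinely new point, but it lies inside the convex hull
# of the training points -- novel mixture, bounded by what was learned.
blend = weights @ training_points
print(blend)  # [0.5 0.3]
```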
Philosophy
Can an AI take responsibility?
A mantra repeated several times at a healthcare conference I attended recently was that only humans, not AI, can take responsibility for something. This made me think more deeply about what it really means to take responsibility and what, if anything, sets humans and AIs apart in this respect.