Researchers have developed a way to detect and correct a common failure in artificial intelligence models that describe images. These vision-language models sometimes "hallucinate" objects that are not actually present, which can mislead users. The new method, called HaloProbe, combines signals from the model itself with external information to identify when the model is inventing objects. Unlike previous methods that try to modify the model itself, HaloProbe assigns the model's output a score indicating when it is being too creative. This approach is more effective at catching hallucinations, and it preserves the model's ability to describe images clearly and accurately.
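The summary gives no implementation details, but the core idea of combining an internal signal with external evidence can be sketched as follows. This is an illustrative toy example, not HaloProbe's actual algorithm: all function names, weights, and thresholds here are hypothetical. It blends the caption model's own confidence in each mentioned object with evidence from an external object detector, and flags objects whose combined score is too low.

```python
# Hypothetical sketch of a hallucination score (not the paper's method).
# Each mentioned object gets a blend of the caption model's internal
# confidence and an external detector's confidence; low scores are
# flagged as likely hallucinations. Weights and thresholds are made up.

def hallucination_scores(internal_conf, external_conf, alpha=0.5):
    """Blend internal and external confidence (each in 0..1) per object."""
    return {
        obj: alpha * internal_conf[obj] + (1 - alpha) * external_conf.get(obj, 0.0)
        for obj in internal_conf
    }

def flag_hallucinations(scores, threshold=0.5):
    """Objects scoring below the threshold are flagged as likely invented."""
    return [obj for obj, s in scores.items() if s < threshold]

# Example: the caption mentions "dog", "frisbee", and "cat", but an
# external detector only finds evidence for "dog" and "frisbee".
internal = {"dog": 0.9, "frisbee": 0.7, "cat": 0.8}
external = {"dog": 0.95, "frisbee": 0.6}

scores = hallucination_scores(internal, external)
print(flag_hallucinations(scores))  # "cat" lacks external support, so it is flagged
```

In this sketch, "cat" has high internal confidence but no external evidence, so its blended score (0.4) falls below the threshold and it is flagged, mirroring the idea of using a score rather than modifying the model.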