Large language models (LLMs) have shown promise in reducing the time, cost, and error rates associated with manual data extraction. A recent study demonstrated that LLMs outperformed conventional natural language processing approaches in abstracting information from pathology reports. However, challenges remain, including the risks of weakened critical thinking, propagated biases, and hallucinated content, which may undermine the scientific method and disseminate inaccurate information. Adopting suitable guidelines (e.g., CANGARU) should be encouraged to ensure responsible LLM use.
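As a minimal sketch of what such LLM-based abstraction might look like in practice, the snippet below prompts a model to return a few structured fields from a free-text pathology report via the OpenAI Python SDK. The model name, prompt wording, field list, and sample report are illustrative assumptions, not the protocol of the cited study.

```python
# Minimal sketch of LLM-based pathology report abstraction.
# Assumptions (not from the cited study): the OpenAI Python SDK,
# the "gpt-4o-mini" model name, and the field list below are
# illustrative choices only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REPORT = """Specimen: left breast, core biopsy.
Diagnosis: invasive ductal carcinoma, grade 2.
ER positive (90%), PR positive (60%), HER2 negative."""

# Ask the model to reply with a JSON object containing only the fields of interest.
prompt = (
    "Extract the following fields from the pathology report and reply "
    "with JSON only: specimen_site, diagnosis, histologic_grade, "
    "er_status, pr_status, her2_status. Use null when a field is absent.\n\n"
    f"Report:\n{REPORT}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # constrain output to valid JSON
    temperature=0,  # favor reproducible output for extraction tasks
)

fields = json.loads(response.choices[0].message.content)
print(fields)
```

Given the hallucination risk noted above, any pipeline of this kind would need its outputs validated against the source reports rather than accepted at face value.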