8 Comments
Charles Thayer:

Thanks for this. I didn't realize there was a paper; I've been thinking about this idea for a while because I'm working on a medical diagnosis system. I wanted to let you know that there are now Prolog MCP servers out there, so one can leverage Prolog in a very ad hoc way in an agent when it's helpful to do more "strict reasoning" or "solution exploring".

Steve from Steve's Lab:

I wrote a Prolog in Python, and OpenAI adds facts. That's how I script agents.
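A minimal sketch of what a "Prolog in Python with LLM-added facts" setup might look like (this is hypothetical, not Steve's actual implementation): facts live in a plain set, and a handler appends new ground facts as the model emits them, with naive forward chaining for inference.

```python
# Hypothetical sketch: a tiny Prolog-style fact store in Python.
# An LLM response handler would call assert_fact() with model output.

facts = set()   # ground facts, e.g. ("parent", "tom", "bob")
rules = []      # (head, body) pairs: head holds if every body fact holds

def assert_fact(pred, *args):
    """Add a ground fact, as an LLM output handler might."""
    facts.add((pred, *args))

def add_rule(head, body):
    """Register a ground rule (this sketch skips variables/unification)."""
    rules.append((head, body))

def infer():
    """Naive forward chaining to a fixed point."""
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all(b in facts for b in body) and head not in facts:
                facts.add(head)
                changed = True

assert_fact("parent", "tom", "bob")
add_rule(("ancestor", "tom", "bob"), [("parent", "tom", "bob")])
infer()
print(("ancestor", "tom", "bob") in facts)  # True
```

A real Prolog embedding would add variables and unification; the point here is only the shape of the loop: the model asserts facts, the engine derives consequences deterministically.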

Shchegrikovich:

Can you share an example?

Steve from Steve's Lab:

I wrote a small post about it today. I will write more.

Anthony Garland:

How does it compare to other symbolic reasoning approaches? Could the LLM just write SymPy (Python) code? What limitations did you think of when reading the papers? FYI, I haven't read the paper yet.

Shchegrikovich:

Writing code helps LLMs reason better. There are two approaches. First, use code in Prolog or Python for validation and inference; this includes running the code and sending the output back to the LLM. Second, generate code and use the LLM itself to interpret it, as I mentioned here - https://shchegrikovich.substack.com/p/two-methods-and-prolog-to-increase
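The first approach (run the model's code, send the output back) can be sketched in a few lines. This is a toy illustration, not production code: `ask_llm` is a stand-in stub for a real API call, and in practice the `exec` step would need sandboxing.

```python
# Sketch of the validate-and-feed-back loop: the model emits checkable
# code, we execute it, and the result goes back into the conversation.

def ask_llm(prompt):
    # Stub standing in for a real LLM API call. Pretend the model
    # answered an arithmetic question with verification code.
    return "result = (17 + 5) * 3"

def run_and_verify(code):
    """Execute model-written code in a scratch namespace; return `result`."""
    scope = {}
    exec(code, scope)  # in production, sandbox untrusted code!
    return scope.get("result")

code = ask_llm("Compute (17 + 5) * 3 and store it in `result`.")
answer = run_and_verify(code)
print(answer)  # 66
feedback = f"Your code returned {answer}. Is that consistent with your reasoning?"
```

The key design point is that the interpreter, not the model, produces the final value, so arithmetic and logic errors in the model's prose can't silently survive the round trip.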

Kinder:

Can RAG help with this problem?

Shchegrikovich:

I think RAG can help with additional facts and context, but it won't fully fix the real issue of logical/symbolic reasoning.
