CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
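For readers unfamiliar with the bug class the title names: "object confusion during (de)serialization" generally means a serializer dispatches an object down the wrong code path, so fields that one path would redact get dumped verbatim by another. The sketch below is a minimal, invented illustration of that pattern in Python — none of these class or function names come from langchain-core, and this is not the actual vulnerable code.

```python
# Hypothetical sketch of the "object confusion" bug class: a type-aware
# serializer that redacts secrets, next to a generic fallback that dumps
# every attribute. All names are invented for illustration only.

class APIClient:
    def __init__(self, endpoint, api_key):
        self.endpoint = endpoint
        self.api_key = api_key  # secret: must never reach serialized output

    def to_dict(self):
        # The careful, type-specific path redacts the secret.
        return {"type": "APIClient", "endpoint": self.endpoint}

class PlainRecord:
    def __init__(self, **fields):
        self.__dict__.update(fields)

    def to_dict(self):
        # The generic path dumps every attribute verbatim.
        return {"type": "PlainRecord", **self.__dict__}

def serialize(obj):
    # Correct dispatch: the object serializes itself.
    return obj.to_dict()

def confused_serialize(obj):
    # "Object confusion": the dispatcher treats a secret-bearing object
    # as a plain record, so the generic dump path runs and the secret
    # lands in the output.
    return PlainRecord.to_dict(obj)

client = APIClient("https://api.example.com", "sk-very-secret")
safe = serialize(client)            # no api_key in the result
leaky = confused_serialize(client)  # api_key leaks into the result
```

If the leaked payload is later fed back into a deserializer that trusts the `type` tag, the same confusion can escalate beyond disclosure, which matches the title's "in some cases escalate further" hedge.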
The best part about this is that the people and companies using langchain are likely the type that won't patch this in a timely manner.
Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user.
WHY on earth did the author of the CVE feel the need to feed the description text through an LLM? I get dizzy when I see this AI slop style.
I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.
> WHY on earth did the author of the CVE feel the need to feed the description text through an LLM?
Not everyone speaks English natively.
Not everyone has taste when it comes to written English.
If I want to cleanup, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.