In this article, we demonstrate a subtle but devastating backdoor in finite-field Diffie–Hellman. By computing public keys modulo $p^2$ instead of $p$ while restricting the secret exponent to $x \leq p-1$, the discrete logarithm becomes efficiently recoverable using Fermat quotients. We show the full derivation and provide a working Sage implementation.

Backdoors are always bad, but they are catastrophic when embedded in a fundamental primitive like Diffie–Hellman key exchange. If your browser shows a green lock, you assume your connection is secure. But what if the Diffie–Hellman implementation contains a tiny change that looks harmless in code review, yet allows an attacker to recover the private exponent in milliseconds? In this post I'll show a nasty little backdoor that requires only a tiny modification: using a modulus of $p^2$ instead of $p$, while keeping the secret exponent bounded by $p$. This complete...
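To make the mechanism concrete, here is a minimal pure-Python sketch of the attack (the post's full implementation is in Sage; the prime $p$, generator $g$, and exponent below are illustrative assumptions, not the article's parameters). The key fact is that by Fermat's little theorem $g^{p-1} \equiv 1 + q_g \cdot p \pmod{p^2}$ for some $q_g$ (the Fermat quotient of $g$), so a backdoored public key $y = g^x \bmod p^2$ satisfies $y^{p-1} \equiv (1 + q_g p)^x \equiv 1 + x\,q_g\,p \pmod{p^2}$, and $x \equiv q_y \cdot q_g^{-1} \pmod{p}$ falls out with a single modular inversion:

```python
# Sketch of the p^2 Diffie-Hellman backdoor attack via Fermat quotients.
# Parameters are illustrative assumptions (p must be prime and g must not
# be a Wieferich base for p, so that q_g != 0 mod p).

p = 1000003          # small illustrative prime
g = 2                # generator (base)
x = 424242           # "secret" exponent, bounded by p - 1
y = pow(g, x, p * p) # backdoored public key: computed mod p^2, not mod p

def fermat_quotient(a: int, p: int) -> int:
    """Return q_a = (a^(p-1) - 1) / p mod p, the Fermat quotient of a."""
    return ((pow(a, p - 1, p * p) - 1) // p) % p

# y^(p-1) = (g^(p-1))^x = (1 + q_g*p)^x = 1 + x*q_g*p (mod p^2),
# so q_y = x * q_g (mod p), and x is recovered by one inversion mod p.
q_g = fermat_quotient(g, p)
q_y = fermat_quotient(y, p)
x_recovered = (q_y * pow(q_g, -1, p)) % p

print(x_recovered == x)  # the full exponent is recovered since x < p
```

Note that the attack recovers $x$ only modulo $p$; this is exactly why the backdoor also requires the exponent bound $x \leq p-1$, which makes the residue the full secret.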
Jailbreaking ChatGPT's Filters: How Far Can Clever Prompting Go?

Modern AI systems have sophisticated guardrails designed to block copyrighted material, harmful content, and sensitive data. But how strong are these defenses really? For years I've been fascinated by where these filters actually operate: on the input, during reasoning, or on the final output? This isn't about breaking laws; it's about understanding the limits of current alignment techniques. Can you trick the AI into outputting content that should actually be behind some filter wall? By a happy coincidence, since I am a big fan of Quanta Magazine, I stumbled upon a related article a few weeks ago [1] that influenced this post.