
@adlrocha - Google's ZKP-hidden quantum attack


This week started with a bang. Anthropic accidentally leaked the source code for Claude Code, and within hours someone had kicked off a clean-room rewrite in Python. The internet, understandably, caught fire, and it seemed like the perfect topic to write about this week. But with so many threads still open, and people still trying to make sense of the code base, I decided to leave it for when the dust settles (that way I can read the code base myself and draw my own conclusions before rushing into writing anything).

Fortunately, amidst the noise of Claude Code’s leak, Google Quantum AI made a release (Google featuring this newsletter again) that didn’t get the attention that I think it deserved. It was the perfect excuse to write again in this newsletter about quantum computing.

I’ve been fascinated by quantum computing since I was first introduced to it (at the time, I even wrote a patent that leveraged quantum information to reach consensus in distributed networks, but I’ll spare you the details for now). Of all the fancy new technologies coming up these days, quantum computing is, to me, one of the hardest technology timelines to read. Since I started following the field closely, there’s been an enormous amount of hype, a few winters, a lot of exciting progress, and no immediate use case to show off yet.

I’ve been studying the technology on the side for years, but never worked on it professionally. My only hands-on experience has been through a few Qiskit hackathons many years ago (I guess the barriers to entry were high). I’ve been meaning to go back and get hands-on time with something like IBM’s publicly available quantum systems just to recalibrate my intuition, but I never find the time or motivation. This paper made me feel, more acutely than ever, that I need to recover this rusty skill.

The TL;DR of what Google dropped this week is a whitepaper claiming to reduce the quantum resources needed to break Bitcoin’s cryptography by roughly 20-fold. Cryptocurrencies and quantum computing… you can imagine how this topic took precedence over Claude Code’s leak.


Shor’s algorithm and the hard problem underneath ECDSA

Before we get to the papers, let’s set the stage so everyone (regardless of your familiarity with the space) is on the same page. This means taking a quick trip into the cryptographic primitives that currently protect every Bitcoin and Ethereum transaction.

When you sign a transaction on Bitcoin or Ethereum, you’re using a cryptographic primitive called ECDSA: the Elliptic Curve Digital Signature Algorithm. The security of ECDSA rests entirely on one hard problem: the Elliptic Curve Discrete Logarithm Problem (ECDLP). Here’s a high-level intuition of what this problem is all about.

An elliptic curve over a finite field forms a specific algebraic structure: a prime-order cyclic group. You’ll see that this really matters when we discuss how it can be attacked by quantum computers. The group is generated by a single distinguished point G (the generator), and every element of the group can be written as k·G for some integer k. Your private key is that integer k. Your public key is Q = k·G, the generator point “multiplied” by your private key, where multiplication means repeatedly applying a specific point-addition rule defined by the curve’s geometry.

Given Q and G, recovering k by brute force classically (meaning with our current computing systems) requires roughly 2^128 operations on Bitcoin’s curve (secp256k1). That’s a few hundred undecillion operations, effectively the age of the universe at a billion operations per second. The problem is hard in one direction only: computing Q from k is instant, while the reverse is infeasible. This asymmetry is what cryptographers call a hard problem, and it’s why such problems are so appealing as foundations for cryptographic primitives.
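To make the asymmetry concrete, here’s a minimal (and deliberately naive) Python sketch of secp256k1 scalar multiplication. The curve constants are the real secp256k1 parameters; everything else is illustrative, not production code (no constant-time hardening, no input validation):

```python
# Double-and-add on secp256k1. Computing Q = k*G takes ~log2(k) doublings;
# inverting it (recovering k from Q) is the ECDLP.
p = 2**256 - 2**32 - 977          # secp256k1 field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    # group law on y^2 = x^3 + 7 over GF(p); None is the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                        # P + (-P)
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p   # tangent slope
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p  # chord slope
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P=G):
    # the cheap direction: k*P in O(log k) group operations
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

Q = scalar_mult(0xDEADBEEF)   # a toy "public key"; recovering 0xDEADBEEF from Q
                              # by brute-force scan already takes ~2^31 operations
```

Scale the private key up to 256 bits and the forward direction stays a few hundred operations while the reverse blows past the age of the universe.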

Remember my post a few months ago about complexity theory and P=NP? This has a lot to do with that. Cryptographic primitives are built on the assumed hardness of specific problems. Technically, ECDLP sits in NP∩co-NP: it’s not known to be NP-hard in the strict complexity-theoretic sense, and most cryptographers believe it isn’t. It isn’t known to be in P either. Another hard problem commonly used for cryptographic primitives is integer factorisation, the hard problem underlying RSA, for instance, which sits in exactly the same class: NP∩co-NP, not NP-complete, not known to be efficiently solvable. Both problems are “believed hard” without being provably hard in the complexity-theoretic sense.

Both problems resist classical attacks for the same reason: no efficient algorithm has been found after decades. And here is where Shor’s famous algorithm enters the scene.

Shor’s algorithm, published in 1994, exploits the cyclic structure of the group. Rather than brute-forcing the keyspace, it uses quantum Fourier transforms and period-finding on the multiplicative structure of the group to extract k from Q in polynomial time. The precise gate complexity is approximately O(n² log n log log n) in the bit-length n of the key (often cited as O(n²) for shorthand), though the full form matters when you’re counting Toffoli gates against a hardware budget. (A Toffoli gate is the quantum equivalent of a controlled-controlled-NOT, used to implement AND operations reversibly; think of it as the universal reversible gate of quantum computing. These gates will be important when we discuss the contributions of the released papers.) For a 256-bit key, that’s tractable, if you have a sufficiently large quantum computer.
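To see what “polynomial time” buys, here’s a hedged back-of-envelope comparing how Shor’s asymptotic gate count and the best classical attacks scale with key length. The functions drop all constant factors, so only the ratios are meaningful:

```python
# Asymptotic scaling only; constants dropped, so ratios are the meaningful part.
import math

def shor_gates(n):
    # the cited O(n^2 log n log log n) gate count for an n-bit key
    return n * n * math.log2(n) * math.log2(math.log2(n))

def classical_ops(n):
    # best classical ECDLP attacks (Pollard's rho) cost ~2^(n/2) group ops
    return 2.0 ** (n / 2)

quantum_ratio = shor_gates(256) / shor_gates(128)          # ~4.9x
classical_ratio = classical_ops(256) / classical_ops(128)  # 2^64, ~1.8e19x
```

Doubling the key size multiplies Shor’s cost by a single-digit factor but squares the classical attack cost, which is why key-size inflation cannot save ECDSA from a quantum adversary.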

The question has always been: how large is “sufficiently large”? I think you see where I’m going with this. The papers released this week seem to have changed our existing intuitions about how many qubits are needed for Shor’s algorithm to break our existing cryptography.


The two papers released

The two papers that dropped this week have made some experts reevaluate their timelines for the security of blockchain systems that haven’t adopted post-quantum cryptography:

The first is the Google Quantum AI whitepaper, “Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations”, authored by Ryan Babbush and Craig Gidney at Google Quantum AI, alongside Thiago Bergamaschi (UC Berkeley), Justin Drake from the Ethereum Foundation, and Dan Boneh from Stanford. Google also published a blog post on the responsible disclosure methodology.

Let me give you some background about some of the authors so you can frame this contribution in the state of the art. Justin Drake is one of the primary researchers at the Ethereum Foundation responsible for Ethereum’s data-availability roadmap; he was a key architect behind EIP-4844 and the KZG trusted setup ceremony. Dan Boneh is a professor of computer science at Stanford, co-director of the Stanford Security Lab, and co-author of the most widely used applied cryptography textbook in the field. His free online cryptography course has been taken by over half a million people, and some of his papers were key for the development of Filecoin (another one that hits home). Finally, Craig Gidney has been responsible for a lot of the recent progress in quantum cryptanalysis: he published a paper in May 2025 showing RSA-2048 breakable with under 1 million physical qubits, down from 20 million in his own 2019 estimate. You can imagine the weight that claims from these people carry in their respective fields.

On the other hand, the Oratomic paper, “Shor’s algorithm is possible with as few as 10,000 reconfigurable atomic qubits”, comes from Oratomic, a neutral-atom quantum computing company out of Pasadena, with John Preskill (Caltech) and Dolev Bluvstein as co-authors. Crucially, the Google whitepaper cites the Oratomic circuits as its own input; the two papers are cross-linked and share the same circuit design.

The papers present two circuit variants for attacking secp256k1:

  • Circuit 1: ≤1,200 logical qubits, ≤90 million Toffoli gates

  • Circuit 2: ≤1,450 logical qubits, ≤70 million Toffoli gates

Translated to physical hardware using surface codes on a superconducting architecture (planar degree-4 connectivity, consistent with Google’s Willow-class chips): fewer than 500,000 physical qubits. The previous best estimate, Litinski (2023), put this at roughly 9 million physical qubits. Google just moved that needle by nearly 20-fold.

That reduction didn’t come from a hardware breakthrough; it came from a better circuit. Running Shor’s on ECDLP isn’t just “run the algorithm” (this is something I learnt the hard way the first time I was tinkering with Qiskit and IBM’s quantum computers). The core computation is elliptic curve point multiplication, computing k·G using arithmetic on secp256k1, which Shor’s algorithm needs to evaluate in quantum superposition as part of its period-finding routine. That means implementing modular arithmetic (specifically Montgomery multiplication, the standard technique for efficient modular operations) entirely in reversible quantum gates.
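For intuition about what the circuit has to compute, here’s a classical Python sketch of Montgomery multiplication (the REDC reduction) over secp256k1’s field. The quantum circuit implements this same dataflow with reversible gates; this sketch is purely classical and illustrative:

```python
# Classical REDC (Montgomery reduction): multiplication mod p without ever
# dividing by p, only multiplies, adds, and a shift by the radix.
p = 2**256 - 2**32 - 977        # secp256k1 field prime
R = 2**256                      # Montgomery radix (power of 2, gcd(R, p) = 1)
R_inv = pow(R, -1, p)
p_prime = (-pow(p, -1, R)) % R  # satisfies p * p_prime = -1 (mod R)

def redc(T):
    # returns T * R^-1 mod p
    m = (T % R) * p_prime % R
    t = (T + m * p) >> 256      # exact shift: T + m*p is divisible by R
    return t - p if t >= p else t

def to_mont(a):   return a * R % p
def from_mont(a): return a * R_inv % p

def mont_mul(a_m, b_m):
    # product of two Montgomery-form values, result stays in Montgomery form
    return redc(a_m * b_m)
```

Every multiply, add, and shift in `redc` becomes a cascade of Toffoli gates on the quantum side, and each intermediate value must later be uncomputed, which is where the gate counts explode.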

Every classical arithmetic operation has to be “uncomputed” after use to avoid accumulating garbage qubits that would corrupt the superposition. The dominant cost is Toffoli gates, and there are hundreds of millions of them in a naively constructed circuit.

Prior work optimised either qubit count or gate count, but not both simultaneously. The relevant figure of merit for real hardware is spacetime volume, i.e. the product of qubits × gates × cycle time, because that’s what determines wall-clock runtime on an actual machine.
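As a toy illustration of that figure of merit, using the papers’ stated upper bounds (and ignoring cycle time, which depends on architecture):

```python
# Spacetime volume, crudely: logical qubits x Toffoli count. The cycle-time
# factor is omitted here since it varies by hardware platform.
circuit_1 = {"logical_qubits": 1_200, "toffoli": 90_000_000}
circuit_2 = {"logical_qubits": 1_450, "toffoli": 70_000_000}

def volume(c):
    return c["logical_qubits"] * c["toffoli"]

# Circuit 2 spends ~20% more qubits to save ~22% of the gates, ending up with
# a slightly smaller overall volume: 1.015e11 vs 1.08e11 qubit-gates.
```

That trade is exactly why neither “fewest qubits” nor “fewest gates” alone is the right optimisation target.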

Google’s contribution is a circuit that achieves the best spacetime volume ever published for ECDLP-256, through two main improvements. First, they applied improved windowing to Montgomery multiplication: rather than processing one bit of the scalar at a time, they process wider windows, amortising the Toffoli cost across more bits per round, reducing the total gate count substantially.

Second, they revised the T-state factory overhead: magic state distillation (the process for producing the high-fidelity ancilla states that Toffoli gates consume) is the dominant physical qubit cost in any surface-code implementation, and prior estimates were conservative. More careful accounting of distillation factory layout and scheduling cut the physical qubit estimate significantly. The combination brought the spacetime volume down far enough to slash the physical qubit requirement relative to Litinski 2023, and Litinski 2023 had already improved substantially on everything before it.

But before going any further, I think it’s worth stressing the distinction between logical and physical qubits and why it matters. Theoretical qubits are what algorithms assume: perfect, noiseless two-state quantum systems. Logical qubits are error-corrected abstractions built from many physical qubits using a quantum error-correcting code, typically a surface code. (I have to admit that, loving information theory as I do, error correction is the corner of this field that fascinates me most; I actually leveraged some of these error-correction techniques in my patent.)

Physical qubits are the actual noisy hardware. Today’s devices operate at error rates around 10^-3 per gate, which means you need roughly 1,000 physical qubits to sustain one reliable logical qubit. The overhead varies by architecture and target error rate, but it’s the dominant cost in any near-term hardware plan.
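That ~1,000x overhead can be roughly reproduced from the textbook surface-code error-suppression model. The parameters below are illustrative assumptions of mine (a 10x gap to threshold, a 10^-12 per-cycle failure budget), not figures from the papers; exponents are kept as integers so the arithmetic is exact:

```python
# Surface-code sizing sketch, assuming the common suppression law
#   p_logical ~ (p_phys / p_threshold)^((d+1)/2)
p_phys_exp   = -3    # physical error rate 10^-3 per gate
p_thresh_exp = -2    # surface-code threshold ~10^-2
target_exp   = -12   # demand ~10^-12 logical failure per cycle

d = 3
# each step of 2 in code distance buys one more factor of (p_phys / p_threshold)
while (p_phys_exp - p_thresh_exp) * (d + 1) // 2 > target_exp:
    d += 2           # surface-code distances are odd
physical_per_logical = 2 * d * d   # ~2d^2 data + measurement qubits

# d = 23  ->  roughly 1,058 physical qubits per logical qubit
```

Note how sensitive the result is to the gap between physical error rate and threshold: shrink that gap and the required distance, and hence the qubit bill, balloons.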

To put the current state in perspective: Google’s Willow chip has 105 physical qubits. IBM’s Condor processor reached 1,121 qubits in late 2023, the largest superconducting qubit count to date, though not all at useful error rates. The gap between today and 500,000 error-corrected qubits is still enormous. But the conceptual threshold has moved, and it’s moved faster than almost anyone expected.

The two papers cover different hardware architectures, and the distinction matters. Superconducting qubits, the technology behind Google Willow and IBM’s quantum systems, encode quantum information in tiny circuits cooled near absolute zero (i.e. close to 0 kelvin), where electrical resistance vanishes and quantum effects dominate. Gate operations run in nanoseconds to microseconds. Neutral-atom architectures, like those used by Oratomic, trap individual atoms using focused laser beams and manipulate their quantum states optically. They achieve extremely long coherence times and flexible qubit connectivity, but gate operations are around 1000x slower. Ion trap systems (IonQ, Quantinuum) work on similar principles: individual ions levitated in electromagnetic fields and controlled with lasers. IonQ’s Forte system currently achieves around 29 “algorithmic qubits”, roughly the effective logical qubit count after accounting for noise. The Oratomic team reported 6,100 coherent atomic qubits trapped, with fault-tolerant operations demonstrated below the error threshold on around 500 qubits.

The Oratomic result is the more striking one in raw qubit count: the same computation runs with as few as 10,000–26,000 qubits on neutral-atom hardware. The catch: at current clock speeds (around 1ms/cycle), runtime is close to 10 days, not minutes. That limits the attack to at-rest targets, long-dormant wallets that have been sitting on-chain for years, not live transaction interception.

That clock speed difference is one of the genuinely novel framings in these papers. Superconducting hardware runs gate cycles in microseconds; neutral atoms and ion traps are 100–1,000x slower. This determines which kind of attack is feasible. The papers define three categories: on-spend (race Bitcoin’s block clock before the transaction confirms), at-rest (target publicly exposed keys on dormant wallets), and on-setup (recover secrets from one-time cryptographic ceremonies like KZG). Fast-clock architectures enable on-spend. Slow-clock ones are limited to the other two.
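A crude way to see why clock speed picks the attack category: assume, purely for illustration, that the full attack takes on the order of 10^9 error-correction cycles (my round number, not the papers’ exact figure), and compare wall-clock time per architecture:

```python
# Wall-clock feasibility per architecture; only the relative comparison matters.
CYCLES = 1_000_000_000                       # illustrative total QEC cycles
cycle_time_s = {"superconducting": 1e-6,     # ~microsecond cycles
                "neutral atom":    1e-3}     # ~millisecond cycles
runtime_days = {k: CYCLES * t / 86_400 for k, t in cycle_time_s.items()}

# superconducting: ~0.01 days (tens of minutes -> on-spend races conceivable)
# neutral atom:    ~11.6 days (at-rest and on-setup targets only)
```

The three-orders-of-magnitude gap in cycle time translates directly into minutes versus weeks, which is the entire on-spend versus at-rest distinction.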


The ZKP disclosure 😱

Here’s the part that really blew my mind about Google’s whitepaper (and that, I think, further justifies having Justin Drake and Dan Boneh around for the paper). Google did not publish the attack circuits. Instead, they published a zero-knowledge proof that the circuits work.

The attack circuit, a sequence of quantum gate operations implementing Shor’s algorithm for secp256k1, was written as ordinary Rust code using a quantum circuit library that models qubits, gates (Hadamard, CNOT, Toffoli, phase rotation), and multi-qubit arithmetic operations. The program encodes the Montgomery modular multiplication routine at the core of the elliptic curve group arithmetic, the quantum Fourier transform used for period extraction, and the bookkeeping that wires those components into a complete Shor’s instance for ECDLP-256. The circuit itself is a classical description of a quantum computation, a directed graph of gate operations to be executed on hardware. It’s the blueprint, not the machine. (Sidenote: the circuit in the image is the classical textbook rendering of Shor’s algorithm, for those of you who have never seen one.)

That Rust program was then fed into SP1, a zero-knowledge virtual machine built by Succinct Labs which targets the RISC-V architecture. For those unfamiliar with ZK-VMs, SP1 compiles Rust to RISC-V bytecode (using the standard RISC-V target), and then generates a cryptographic proof, specifically a STARK-based proof, that a given RISC-V program was executed correctly on specific inputs and produced a specific output. You get a proof of correct execution without anyone needing to see the program or the inputs.

In this case: Google ran the circuit program against 9,000 randomly sampled secp256k1 input points, verified that the circuit correctly performs the elliptic curve operations it claims to, and had SP1 generate a proof of that execution. The SHA-256 hash of the circuit was committed publicly so anyone can verify they’re talking about the same circuit. The SP1 proof attests: “this hash corresponds to a program that, when run on these inputs, produces these outputs consistently with a correct Shor’s implementation for ECDLP-256.”
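The commit-then-prove pattern is easy to demonstrate in miniature. The ZK proof itself needs SP1’s whole proving stack, but the public-commitment half is just a hash (the blob below is a stand-in, not the actual circuit):

```python
# Commit-then-prove in miniature: the hash-commitment side of the release.
import hashlib

circuit_blob = b"stand-in bytes for the serialized attack circuit"
commitment = hashlib.sha256(circuit_blob).hexdigest()   # published openly

def matches(blob, public_commitment):
    # anyone can check a later-released blob against the public commitment
    return hashlib.sha256(blob).hexdigest() == public_commitment
```

The SP1 proof then binds statements to that same hash, so the circuit can stay secret today while remaining verifiably identical if it is ever released.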

The inner SP1 proof is a STARK. STARKs have no trusted setup, but they’re large, hundreds of kilobytes to megabytes. So SP1 wraps the STARK in an outer Groth16 SNARK. Groth16 takes the STARK proof as a statement to be proved and generates a compact proof of it: roughly 200 bytes, regardless of the complexity of the original computation. The final artefact, code and proof, sits on Zenodo. Anyone can download it and verify Groth16’s 200-byte proof in milliseconds, without ever seeing the attack circuit.

What this means practically: the existence and correctness of the attack is publicly verifiable. The attack tool itself is not.

This is a genuinely new move in responsible disclosure. The standard practice for software vulnerabilities is to notify the vendor, wait a window, then publish. But there’s no vendor to notify here, no patch to deploy in 90 days. So Google found a different answer: prove the result is real, withhold the exploit.

Here’s where it gets funny, or uncomfortable, depending on your perspective. Groth16 is itself an elliptic curve construction. It operates over BN254, a pairing-friendly curve distinct from secp256k1, but it is still fundamentally an elliptic curve scheme. The pairings that make Groth16 work rely on the same class of hard problems, discrete logarithms on elliptic curves, that Shor’s algorithm can break. So Google used a cryptographic primitive that is also eventually threatened by sufficiently powerful quantum computers to prove the existence of the circuit that threatens elliptic curve cryptography. If CRQCs (Cryptographically Relevant Quantum Computers, the term the whitepaper uses for machines capable of running these attacks) ever arrive at scale, Groth16 and the broader ZKP ecosystem go down with the rest.

I don’t know if that’s elegant or just funny. Probably both.

But what is even crazier to me is that this could eventually become the standard model for future research and proprietary algorithms, where companies and researchers can show that “their algorithms do what they claim to do” without leaking anything about the underlying implementation. That’s enough for a post of its own. I’ve been saying it for a while: ZKP primitives have immediate uses outside of blockchain networks and web3.


Post-quantum cryptography: what exists, what migration looks like

To understand why certain cryptographic schemes survive a quantum computer and others don’t, we need to understand why Shor’s algorithm works in the first place.

Shor’s algorithm is a period-finding machine. It exploits the fact that ECDLP and integer factorisation both reduce to finding the period of a function defined over a cyclic algebraic group. Quantum Fourier transforms make period-finding tractable on cyclic structures, and that’s the attack. The quantum speedup isn’t general; it’s specific to problems with this periodic structure. If you pick a hard problem that doesn’t have it, Shor’s doesn’t help.
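Here’s the period-finding-to-factoring pipeline in classical miniature, for N = 15. The `find_period` loop is exactly the step a quantum computer replaces with a Fourier transform; everything else is cheap classical post-processing:

```python
# Toy Shor pipeline for N = 15 with base a = 2. The while-loop is the
# order-finding step that quantum hardware accelerates exponentially.
from math import gcd

def find_period(a, N):
    # smallest r > 0 with a^r = 1 (mod N); exponential classically in general
    x, r = a % N, 1
    while x != 1:
        x = x * a % N
        r += 1
    return r

N, a = 15, 2
r = find_period(a, N)            # r = 4, since 2^4 = 16 = 1 (mod 15)
f1 = gcd(a ** (r // 2) - 1, N)   # gcd(3, 15) = 3
f2 = gcd(a ** (r // 2) + 1, N)   # gcd(5, 15) = 5
```

The ECDLP version swaps modular exponentiation for elliptic curve point arithmetic, but the shape is identical: find a period, then extract the secret from it classically.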

That’s exactly what post-quantum cryptography does.

Lattice problems, specifically the Shortest Vector Problem (SVP) and its structured variant, Module Learning With Errors (MLWE), ask you to find the shortest non-zero vector in a high-dimensional lattice, or to distinguish a structured equation system from a random one. Neither problem has a cyclic group structure Shor’s can exploit. The best known quantum algorithm for SVP offers only a polynomial speedup over classical approaches, not the exponential gap that Shor’s gives against ECDLP.

SVP is NP-hard in the worst case, and lattice cryptography has an elegant property: worst-case hardness reduces to average-case hardness, which makes the security proofs unusually strong. The specific structured variants used in practice (MLWE, MSIS) sit slightly off the worst-case problem, so ongoing cryptanalysis remains active, but no quantum attack comes close to breaking them.
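A toy LWE instance makes the “no cyclic structure” point tangible. With the noise term e removed, recovering s from (A, b) is Gaussian elimination; with it, no periodic structure survives for Shor’s to latch onto. These are toy parameters of my choosing, far below real security levels:

```python
# b = A*s + e (mod q): the Learning With Errors problem in miniature.
import random
random.seed(7)

q, n, m = 97, 4, 8
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]           # the secret
e = [random.choice([-1, 0, 1]) for _ in range(m)]     # small noise
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
     for i in range(m)]
# Without e, s falls out of linear algebra; with e, (A, b) is conjectured
# indistinguishable from random, and that conjecture is what ML-KEM rests on.
```

Real schemes use n in the hundreds, structured module lattices, and carefully shaped noise, but the hardness source is the same additive-error blinding shown here.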

Hash-based problems rest on collision resistance alone. There is no algebraic structure, no group, no lattice. If SHA-256 or SHAKE-256 resist collision attacks, and there’s no known quantum or classical attack that breaks them, the scheme is secure. Grover’s algorithm gives a quadratic speedup for unstructured search, which halves the effective security level (256-bit security becomes 128-bit), but doubling the output size restores it. That’s a parameter choice, not a structural break.
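Hash-based signatures are simple enough to sketch whole. Here’s a minimal Lamport one-time signature, the conceptual ancestor of SPHINCS+: its security rests on nothing but the hash function (strictly one-time; real schemes like SLH-DSA layer many such instances into a stateless many-time scheme):

```python
# Lamport one-time signature: reveal one of two secret preimages per digest bit.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits=256):
    # one secret pair per message-digest bit; publish only the hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def sign(msg, sk):
    digest = int.from_bytes(H(msg), "big")
    return [sk[i][(digest >> i) & 1] for i in range(len(sk))]

def verify(msg, sig, pk):
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(len(pk)))

sk, pk = keygen()
sig = sign(b"post-quantum", sk)
```

Forging a signature means finding SHA-256 preimages, and the only known quantum lever there is Grover’s quadratic speedup, which parameter sizes absorb.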

Code-based problems, specifically the Syndrome Decoding Problem, ask you to find a codeword in a random linear error-correcting code given a corrupted version. Berlekamp, McEliece and van Tilborg showed in 1978 that SDP is NP-complete in the worst case. No quantum speedup beyond polynomial is known. The cost has historically been large key sizes (around 1MB for McEliece-based schemes), but newer constructions have reduced this substantially.

The NIST post-quantum standards, i.e. the post-quantum schemes NIST has accepted so far, are a portfolio of bets across those three problem families:

  • ML-KEM (FIPS 203), key encapsulation, formerly CRYSTALS-Kyber. Lattice-based (MLWE). FIPS-finalised, production-ready.

  • ML-DSA / Dilithium (FIPS 204), digital signatures. Lattice-based (MLWE/MSIS). Signature size: ~2.5KB. FIPS-finalised, production-ready.

  • SLH-DSA / SPHINCS+ (FIPS 205), stateless hash-based signatures. Signature size: ~8KB. FIPS-finalised. Heavy but the most conservative security assumption available.

  • HQC, selected March 2025 as fifth KEM, full standard expected 2027. Code-based (syndrome decoding). Smaller keys than McEliece.

So why not migrate to these primitives immediately? The main issue is the size of the keys and signatures, which can break a lot of assumptions in existing systems (including blockchain networks). Post-quantum keys and signatures can be up to 100-fold larger than their ECDSA or even RSA counterparts.
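Some approximate numbers make the size gap concrete. These come from the FIPS parameter sets and standard ECDSA encodings; treat them as ballpark figures rather than exact wire formats:

```python
# Approximate serialized sizes in bytes (compressed ECDSA key + DER signature;
# ML-DSA-44 and SLH-DSA-SHA2-128s parameter sets). Ballpark only.
sizes = {
    "ECDSA/secp256k1": {"public_key": 33,   "signature": 72},
    "ML-DSA-44":       {"public_key": 1312, "signature": 2420},
    "SLH-DSA-128s":    {"public_key": 32,   "signature": 7856},
}
sig_blowup = {name: v["signature"] / sizes["ECDSA/secp256k1"]["signature"]
              for name, v in sizes.items()}
# ML-DSA signatures: ~34x larger; SLH-DSA: ~109x larger. In a blockchain where
# every node stores every signature forever, that multiplies into state growth.
```

For a chain sized around 70-byte signatures, those multipliers hit block size, bandwidth, and fee assumptions all at once, which is why migration is a protocol redesign rather than a key rotation.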


Has the timeline really changed?

What about all these claims, and the statement in Google’s paper about this discovery making them “reevaluate” current quantum supremacy timelines? My immediate answer would be: “who knows?”

Here’s one thing that I think some people may be missing when reading these results: the dramatic reduction in resource counts is real, but the practical problem is not about how many qubits you need on paper. It’s about whether you can build qubits good enough to make those counts mean anything.

The Google whitepaper assumes a physical gate error rate of 10^-3 sustained uniformly across all qubits. That’s the modelling assumption. Where is hardware today?

The state of the art, as of 2024, is two-qubit gate fidelity of ~99.9%, which is exactly 10^-3. Multiple groups have now reported this number, including Google with Willow. So you might conclude the assumption is already met. Scott Aaronson (you probably remember him as my favourite computer scientist alive :) ), who has been tracking this more carefully than most, made exactly this point in September 2024:

“Within the past year, multiple groups have reported 99.9% [two-qubit gate fidelity]. I’m now more optimistic than I’ve ever been that, if things continue at the current rate, either there are useful fault-tolerant QCs in the next decade, or else something surprising happens to stop that.”

But he also noted that 99.99%, a full order of magnitude better, is what you really need for sustained fault-tolerant operation where error correction delivers a net gain rather than just breaking even. That threshold hasn’t been reached.

There’s a version of the coverage that reads these papers as evidence the timeline itself has shortened. I don’t think that’s right, and the distinction matters. What these papers changed is the target: the number of qubits and gates required on paper to run the attack. What they didn’t change is the distance to that target, which is determined entirely by hardware, and hardware hadn’t moved much this past month. The Willow chip had the same error rates the day after the whitepaper dropped as it did the day before. A more efficient attack circuit doesn’t build better qubits. It lowers the bar you need to clear, but if you can’t clear the bar yet, lowering it isn’t the same as getting closer.

More critically: those fidelity numbers are measured on the best qubit pairs on a 100-qubit chip under carefully optimised conditions. Nobody has demonstrated 99.9% gate fidelity sustained uniformly across a million physical qubits.

Google’s own Willow error correction paper, the paper that demonstrated below-threshold surface code performance for the first time, achieved that milestone on 101 physical qubits. The target for a cryptographically relevant attack is somewhere between 500,000 and 1 million. The Willow paper itself notes that logical performance is limited by rare correlated error events, roughly once per hour, that fall outside the standard noise model fault-tolerance proofs assume. At million-qubit scale, the frequency and character of those events is unknown.

Then there’s inter-chip communication. Gidney’s estimates assume a planar grid of qubits with nearest-neighbour connectivity. At the million-qubit scale, that means stitching together many chips into a coherent quantum system, something that has not been demonstrated anywhere. Aaronson again: “eventually you’ll need communication of qubits between chips, which has yet to be demonstrated.”

There’s a sentence near the end of the whitepaper that I think frames the risk correctly:

“It is conceivable that the existence of early CRQCs may first be detected on the blockchain rather than announced.”

That’s the authors acknowledging a tail scenario, in true Nassim Taleb fashion: a nation-state or well-funded private effort builds this quietly, and the first public evidence of success is unexplained large wallet drains on-chain (my good friend Marko Vukolic always said that Satoshi’s wallet was the biggest quantum computing bounty available, so this claim adds up).

So the honest position is: the resource count dropped dramatically, and that matters. But the real question for the timeline isn’t how many qubits you need on paper, it’s whether anyone can build a million qubits that are actually good enough.

We’ll have to wait and see… Until next week!

Read the whole story
miohtama
5 days ago
reply
Helsinki, Finland
Share this story
Delete


chibird:

Which one are you? 😆 I am definitely feeling the tea + anxiety one right now.

Chibird store | Positive pin club | Instagram


Short Walk


And so begins my annual week of pirate chickens, leading up to September 19th’s Talk Like A Pirate Day!


David Splinter on how much tax billionaires pay


Here is his comment on the paper presented here:

Summary: The U.S. tax system is highly progressive. Effective tax rates increase from 2% for the bottom quintile of income to 45% for the top hundredth of one percent. But rates may be lower among those with the highest wealth. This comment starts with the “top 400” tax rate estimates by wealth in Balkir, Saez, Yagan, and Zucman (2025, BSYZ), and adjusts these to account for Forbes family wealth being spread across multiple tax returns, to avoid double-counting capital income, to include missing taxes, and to apply standard tax and income definitions. This results in “top 400” effective tax rates exceeding overall tax rates by 13 percentage points. Still, the “top 400” tax rate is lower than for the top hundredth of one percent, suggesting a modest decline in effective tax rates at the very top when ranking by wealth. However, this is an unsurprising deviation from progressive rates because the tax system targets income, not wealth. Compared to the annual estimates in BSYZ, longer-run estimates are more appropriate for top wealth groups, which have volatile wealth and concentrate charitable giving into end-of-life bequests. End-of-life giving suggests long-run top 400 effective tax-and-giving rates could exceed 75%.

The full link.

The post David Splinter on how much tax billionaires pay appeared first on Marginal REVOLUTION.


Blue In Mississippi


Bluesky’s decision to drop out of the Mississippi market, while understandable, is a clear sign that we cannot take our digital access for granted.


When I first got on the internet in the mid-’90s, much of my interest in tech was driven by access to resources I could not get through traditional means.

One of the first ways I used the internet was by dialing up a Free-Net line, getting in the Lynx browser, and seeing how far I could get. Often, the sysop kicked me off within about five minutes, but sometimes I managed to get pretty far.

As I could not type in any web addresses myself, my goal was to keep clicking until I found something that looked like the “real” internet. Sometimes, I got lucky.

I was 13. It changed my life.

But a new law recently passed in Mississippi, like the one passed in the U.K., wants to limit who can access social media platforms to those willing to share their identity, and the cited reason for doing so is kids. (The law was named for a teen who committed suicide as the result of a catfishing incident, and passed with bipartisan support. A small tug at heartstrings can do a lot of damage.)

With a court ruling letting the law go through for now, despite the fact that it is likely to fail on First Amendment grounds in a higher court, Bluesky has decided not to test the legal system.

On Friday, it announced that it would no longer operate in the state—creating the perfect example of a patchwork of laws that many worry about. In a blog post announcing the move:

Unlike tech giants with vast resources, we’re a small team focused on building decentralized social technology that puts users in control. Age verification systems require substantial infrastructure and developer time investments, complex privacy protections, and ongoing compliance monitoring—costs that can easily overwhelm smaller providers. This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.

We believe effective child safety policies should be carefully tailored to address real harms, without creating huge obstacles for smaller providers and resulting in negative consequences for free expression. That’s why until legal challenges to this law are resolved, we’ve made the difficult decision to block access from Mississippi IP addresses. We know this is disappointing for our users in Mississippi, but we believe this is a necessary measure while the courts review the legal arguments.


Bluescreen47 shared a message that now appears when they try to load Bluesky in Mississippi.

Worst of all, users now get a message when they have a Mississippi IP and try to load Bluesky.

The message might look strangely familiar to anyone who lives in a state where certain adult content providers have left. And yes, it is very likely that people will use the same methods that those users rely on to access those adult sites. VPNs blew up in popularity in the U.K. recently as a result of its Online Safety Act.

The Bluesky blog noted that the Mississippi law is even more extreme than the Online Safety Act, if that is even possible:

Mississippi’s new law and the UK’s Online Safety Act (OSA) are very different. Bluesky follows the OSA in the UK. There, Bluesky is still accessible for everyone, age checks are required only for accessing certain content and features, and Bluesky does not know and does not track which UK users are under 18. Mississippi’s law, by contrast, would block everyone from accessing the site—teens and adults—unless they hand over sensitive information, and once they do, the law in Mississippi requires Bluesky to keep track of which users are children.

At least one publication in the state, the Mississippi Free Press, has a very large platform on Bluesky, and they may be at risk of losing that megaphone, which puts a focus on the issues facing their state.

“As a nonprofit publication, we do not take positions on specific legislation or laws,” editor Ashton Pittman wrote. “But whatever the Mississippi Legislature’s intent, we now find ourselves in a place where we are now having to grapple with how to ensure we can stay connected with all of our readers, many of whom follow us on Bluesky.”

The internet will grow around the law, like it always does

I have a feeling that what may happen as a result of all of this is that we will gradually begin to decentralize more. Part of the reason big platforms full of user-generated content are targets is because they create a fulcrum that lawmakers can target.

They have figured out that leaning on a few key issues will make it easier to restrict information online, because those issues are hard to compete against in the court of public opinion.

The good news is that, like the fediverse, Bluesky has stronger underpinnings than your average social network. You don’t have to use the Bluesky client. Nor do you have to live on the Bluesky network to communicate over the AT Protocol. External clients could help in a big way.

I asked Anuj Ahooja, the CEO and executive director of A New Social, a nonprofit working to build distributed social technology that works across protocols, what he thought of this situation. He says this moment could be an opportunity “to decouple peoples’ networks from platforms.”

“When platforms make decisions that have an impact on peoples’ livelihood, whether forced or not, people should be able to escape without having to leave their communities behind,” he says.

[Image: A sample of the Bounce tool, which will make it possible to port followers across platforms.]

On top of the nonprofit’s stewardship of the popular Bridgy Fed tool, the organization is in the midst of launching its Bounce tool, which makes it possible to migrate social graphs between Mastodon and Bluesky. For media outlets at risk of getting disconnected from an important platform, for example, this could be a powerful tool.

“The combination of Bridgy Fed and Bounce gives them an exit strategy that lets them preserve their existing communities and relationships,” he adds.

You do not have to use Bluesky to talk to Bluesky, and in a moment like this, that’s a superpower. You can use external clients. You can host your server somewhere other than the Bluesky network. An escape hatch is possible—and companies like Blacksky are already cropping up to help make this hatch a reality.

Other social networks need to take notes, as they will need technical infrastructure to survive this moment. Even if the bug in Mississippi gets squashed, someone else will make another one. We need to be prepared for any situation as it arises.

In an age of platforms, we have constantly let actually owning our infrastructure take a backseat to the desire to make money and get reach. It’s moments like this that offer excellent reminders of why that ownership matters.

As I wrote last year, the real opportunity for Bluesky was not in the social network but the opportunities it opened up for tools to be built on this protocol. If played correctly, this could be the moment where those opportunities flourish.

But I have to admit: I worry about the 13-year-old, still trying to figure out who they are, getting online and having a huge section of the digital discourse blocked off to them. That makes it harder to figure out who they really are or what they care about.

I don’t want to sound like too much of a downer here but we need to be realistic. The internet as we know it is at risk, and just as fundamental protocols enabled it, fundamental protocols will save it.

Maybe those 13-year-olds will get clever like I did and accidentally find a way to reach the good parts of the internet. Laws like the ones in Mississippi tend to forget that those exist.

Blue Links

To those of you who had to throw out radioactive shrimp this week: Hey, at least you got a good story out of it.

The best parody videos are the ones parodying very old YouTube videos you’ve likely forgotten about. Such is the case of this comedic clip of a professor having a mental breakdown because his class cheated on an exam, based on a 14-year-old clip of a professor dressing down his class for 15 minutes. (It has 16 million views.)

I’m honestly kind of sick of Substack at this point, but their App Store policy sets a really dangerous, but subtle precedent. Because people can now pay for a subscription through the App Store, it is now harder for Substack publishers to leave. Isabelle Roughol has the explainer.

--

Find this one an interesting read? Share it with a pal. And to the folks in Mississippi suddenly without easy access to Bluesky, there is a path forward here—just keep that in mind.


An Overview of Market Making Strategies in Crypto: Architecture Design and FMZ Implementation of the Self-Matching Trading Strategy


⚠️ Important Disclaimer

This article demonstrates a volume-boosting self-matching trading strategy intended solely for learning purposes related to trading system architecture. It fundamentally differs from traditional arbitrage or market-making strategies. The core idea of this strategy is to simulate volume through same-price buy-sell matching to obtain exchange rebates or tier-based trading fee discounts, rather than profiting from market inefficiencies.

The provided code is only a reference framework and has not been tested in live trading environments. The strategy implementation is purely for technical study and research, and has not been thoroughly validated under real market conditions. Readers must perform extensive backtesting and risk evaluation before considering any real-world application. Do not use this strategy for live trading without proper due diligence.


In the cryptocurrency market, market-making strategies serve not only to improve liquidity and facilitate trading, but also form a core component of many quantitative trading systems. Market makers quote both buy and sell prices, provide liquidity, and capture profits under various market conditions. Professional market-making systems are often highly sophisticated, involving ultra-low-latency optimization, advanced risk management modules, and cross-exchange arbitrage mechanisms. In this article, we explore the basic concept behind a volume-boosting self-matching trading strategy and demonstrate how to implement a simplified educational framework on the FMZ Quant Trading Platform.

The main content is adapted from the original work “Market Making Strategy: Concepts and Implementation” by Zinan, with some optimizations and adjustments. While some coding practices may now appear outdated, this version recreated on the FMZ platform still offers valuable insights into the structure of trading algorithms and the fundamentals of self-matching trading logic.

Concept of Market Making Strategies

A Market Making Strategy refers to a trading approach where a trader (market maker) places both buy and sell orders in the market simultaneously, thereby providing liquidity and contributing to market stability. This strategy not only helps maintain market depth but also offers counterparties for other traders. By quoting buy and sell prices across various price levels, market makers aim to profit from market fluctuations.

In the context of cryptocurrency markets, market makers play a critical role—especially in markets with low trading volume or high volatility. By offering liquidity, market makers help reduce slippage and make it easier for other traders to execute orders at favorable prices.

The core principle of traditional market-making strategies lies in capturing the bid-ask spread by providing liquidity. Market makers post buy orders at lower prices and sell orders at higher prices, profiting from the difference when orders are filled. For example, when the spot price in the market rises, market makers sell at a higher price and buy at a lower price, earning the difference. The primary sources of income include:

  • Bid-ask spread: Traditional market makers earn profits by placing limit buy and sell orders and capturing the price difference between the bid and ask.
  • Volume-based incentives: The profitability of market makers is closely tied to the trading volume they provide. Higher volume not only leads to more frequent order fills and profit opportunities but also unlocks additional benefits:
    1). Fee rebates: Many exchanges offer fee rebates to incentivize liquidity provision. In some cases, makers receive negative fees—meaning the exchange pays the market maker for executed trades.
    2). VIP tier discounts: Reaching specific volume thresholds may qualify market makers for lower trading fees, reducing operational costs.
    3). Market maker incentive programs: Some exchanges run dedicated incentive programs that reward market makers based on the quality and consistency of their liquidity provision.
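To make the fee arithmetic concrete, here is a minimal sketch of how the bid-ask spread and the maker fee interact over one round trip. The `round_trip_pnl` helper and all numbers are hypothetical, not taken from any exchange's actual fee schedule:

```python
# Minimal sketch of spread-capture economics (all numbers hypothetical).

def round_trip_pnl(bid, ask, qty, maker_fee):
    """Profit from buying at the bid and selling at the ask,
    paying (or receiving, if negative) the maker fee on both legs."""
    spread_profit = (ask - bid) * qty
    fees = (bid + ask) * qty * maker_fee  # fee charged on each leg's notional
    return spread_profit - fees

# With a positive maker fee, the spread must cover two legs of fees.
print(round_trip_pnl(bid=99.95, ask=100.05, qty=1.0, maker_fee=0.0002))
# With a maker rebate (negative fee), the fee term becomes income.
print(round_trip_pnl(bid=99.95, ask=100.05, qty=1.0, maker_fee=-0.0001))
```

This is why the rebate tiers listed above matter so much: the same quotes can be profitable or unprofitable depending purely on which fee tier the maker sits in.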

However, market makers also face significant market risk, especially in highly volatile environments such as the cryptocurrency space. Rapid price swings may cause market makers’ quoted prices to deviate significantly from actual market conditions, potentially resulting in losses.

Types of Market Making Strategies

In the cryptocurrency market, market makers typically choose different strategies based on market conditions, trading volume, and volatility. Common types of market-making strategies include:

  • Passive market making: In this approach, the market maker places buy and sell limit orders based on market depth, historical volatility, and other indicators, then waits for the market to fill those orders. This strategy is characterized by low frequency and a conservative risk profile, relying on natural market movements to generate profit.
  • Active market making: Active market makers dynamically adjust order prices and sizes in real time based on market conditions to improve execution probability. They often place orders close to the current mid-price, aiming to better capture opportunities from short-term volatility.
  • Volume-boosting self-matching strategy: The focus of this article. A volume-boosting self-matching strategy involves placing simultaneous buy and sell orders at the same price to artificially increase trading volume. Unlike traditional market-making, this strategy is not designed to profit from bid-ask spreads. Instead, it seeks to benefit from exchange incentives such as fee rebates, VIP tier discounts, or liquidity mining rewards.

In a volume-boosting self-matching strategy, the market maker posts buy and sell orders at the same price level. While these trades do not generate profits from price differentials, they quickly accumulate trading volume. Profitability is entirely dependent on the exchange’s incentive mechanisms rather than market inefficiencies or arbitrage.

Key characteristics:

  • Same-price orders: Unlike traditional market-making, buy and sell orders are placed at the same price, eliminating spread-based profits.
  • Volume-oriented execution: The primary objective is to accumulate trade volume rapidly, not to exploit price arbitrage.
  • Incentive-driven profitability: Returns are fully reliant on exchange incentives such as fee rebates, VIP tier benefits, or dedicated market maker reward programs.

Important distinction:
Compared to traditional market-making strategies, volume-boosting self-matching strategies do not generate profit by providing genuine market liquidity. Instead, they create trading volume artificially to capitalize on policy-driven rewards offered by exchanges. This type of strategy may carry regulatory or compliance risks in certain jurisdictions and must be carefully evaluated before any live trading deployment.

Profit Logic Behind Volume-Boosting Self-Matching Strategies

Upon analyzing the code implementation, it becomes evident that the buy and sell prices are exactly the same in this strategy:

def make_duiqiao_dict(self, trade_amount):
    mid_price = self.mid_price  # Mid price
    trade_price = round(mid_price, self.price_precision)  # Round to the price precision
    trade_dict = {
        'trade_price': trade_price,  # The same price is used for both buying and selling
        'amount': trade_amount
    }
    return trade_dict

Actual Profit Mechanism

1. Volume-boosting via self-matching

  • The core objective of this strategy is to increase trading volume rapidly through high-frequency self-matching.
  • Profit is derived from exchange incentives such as fee rebates, VIP tier upgrades, or liquidity mining rewards.
  • This approach is applicable on exchanges that offer formal market maker incentive programs.

2. Fee rebate mechanism

  • The strategy relies on exchanges offering negative maker fees (i.e., the maker rate is negative).
  • By providing liquidity, the strategy earns rebates on filled orders.
  • It requires the exchange to support market maker fee discounts or rebate structures.
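Because both legs of a self-matched pair fill at the same price, there is no spread profit at all; the sign of the trade is determined entirely by the maker fee. A hedged sketch of that break-even arithmetic (the `self_match_net` helper and all numbers are illustrative, not from any exchange):

```python
# Break-even check for one self-matched buy+sell pair (hypothetical numbers).
# Both legs fill at the same price, so PnL is fee-driven only.

def self_match_net(price, qty, maker_fee):
    """Net cash flow of one self-matched pair: two legs of fees,
    where a negative maker_fee is a rebate that turns fees into income."""
    notional = price * qty
    return -2 * notional * maker_fee  # two legs, zero spread profit

print(self_match_net(100.0, 1.0, 0.0002))   # positive fee -> guaranteed net loss
print(self_match_net(100.0, 1.0, -0.0001))  # rebate -> small net gain per pair
```

This is the whole profit logic in two lines: without a rebate (or an offsetting tier or mining reward), every matched pair is a guaranteed loss of two legs of fees.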

Suitable Scenarios & Associated Risks

✅ Suitable scenarios

  • Exchanges with clear market maker rebate or incentive policies.
  • Traders aiming to meet high trading volume requirements for VIP tier upgrades.
  • Platforms that run liquidity mining or commission rebate campaigns.

āŒ Unsuitable scenarios

  • Exchanges that do not offer fee rebates or incentives.
  • Platforms with high transaction fees, where self-matching leads to net losses.
  • Exchanges that explicitly prohibit wash trading or enforce restrictions on artificial volume.

⚠️ Risk warnings

  • If both buy and sell orders are filled simultaneously, the strategy may incur a net loss after fees.
  • Changes in exchange policies may render the strategy unprofitable or non-viable.
  • Continuous monitoring of fee structures and trading costs is required.
  • The strategy may face compliance risks in jurisdictions where volume-boosting trading is regulated or restricted.

Self-Matching Strategy Architecture Analysis

This section presents a simplified implementation of a volume-boosting self-matching strategy, inspired by the framework developed by Zinan. The focus is on how to accumulate trading volume through same-price buy and sell orders within a live exchange environment. The strategy architecture is structured around two main classes: MidClass and MarketMaker. These components are responsible for handling exchange interactions and executing the self-matching logic, respectively.

The architecture follows a layered design, separating the exchange interface logic from the trading strategy logic. This ensures modularity, extensibility, and clean separation of concerns. The main components are:

  1. MidClass: The exchange middle layer is responsible for interacting with the exchange interface to obtain market data, account information, order status, etc. This layer encapsulates all interactions with external exchanges to ensure that the trading logic and the exchange interface are decoupled.
  2. MarketMaker: The market making strategy class is responsible for executing the cross-trading strategy, generating pending orders, checking order status, updating strategy status, etc. It interacts with the exchange middle layer to provide specific market making and self-matching trading operations.

MidClass

MidClass is the middle layer of the exchange. Its main responsibility is to handle the interaction with the exchange, encapsulate all external API calls, and provide a simple interface for MarketMaker to use. Its architecture includes the following key functions:

1. Market data retrieval:
Fetches real-time market data such as tickers, order book depth, and candlestick data (K-lines). Regular updates are essential to ensure the strategy operates on up-to-date information.

2. Account information management:
Retrieves account data, including balances, available margin, and open positions. This is critical for capital allocation and risk control.

3. Order management:
Provides functionality to place, query, and cancel orders. This is the foundation of executing a market-making or self-matching strategy and ensures robust control over open orders.

4. Data Synchronization:
Maintains persistent connections with the exchange and updates internal state for use by the strategy layer.

By encapsulating these features in MidClass, the strategy logic within MarketMaker remains focused on execution rather than infrastructure. This structure improves the maintainability and scalability of the system, making it easier to add support for different exchanges or optimize existing functions.

MarketMaker

MarketMaker is the core class of the self-matching strategy, responsible for executing market-making operations and handling self-matching trades. Its architecture typically includes the following key modules:

1. Initialization:

  • Initialize the exchange middleware (MidClass) to retrieve exchange metadata such as trading pair details, precision, tick size, and order book depth.
  • Initialize the market-making strategy, set up key parameters like order quantity, price spread, and execution intervals. These parameters directly affect how the strategy behaves and performs in the market.

2. Data refresh:

  • Periodic market data updates, including real-time account information, market price, depth, K-line, etc. These data provide basic information for executing strategies.
  • The frequency of updates can be dynamically adjusted based on market volatility to ensure timely responses to market changes.

3. Self-matching execution logic:

  • Order book construction: Based on current market depth and price dynamics, construct a dictionary of orders (both bids and asks) with specified price and size. This is typically calculated using predefined strategy parameters.
  • Self-matching execution: Once the order structure is ready, MarketMaker submits both buy and sell orders at the same price level to the market. The goal is to accumulate trade volume quickly via same-price order matching.
  • Order status monitoring: During execution, MarketMaker constantly checks the status of each order to ensure pending orders are processed in time. If an order fails to execute, the strategy adjusts the pending order's price or quantity until it is completed.

4. Strategy state update:

  • Strategy status update: Regularly update key performance indicators such as cumulative trading volume, filled order count, and total fees. These metrics allow real-time tracking of the strategy’s performance.
  • Dynamic risk management: The strategy adapts its behavior based on current market conditions. MarketMaker can modify execution logic in real time to reduce risk and maintain operational efficiency across varying market environments.
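The division of labor described above can be condensed into a skeleton. Everything here beyond the MidClass and MarketMaker names is illustrative stub code, not the article's actual implementation (which appears in full in the Strategy Code section):

```python
# Skeleton of the layered design: MidClass wraps the exchange API,
# MarketMaker holds only strategy logic. Method bodies are stubs.

class MidClass:
    """Exchange middle layer: the only place that touches the exchange."""
    def __init__(self, exchange):
        self.exchange = exchange

    def get_ticker(self): ...
    def get_account(self): ...
    def create_order(self, side, price, amount): ...
    def cancel_order(self, order_id): ...


class MarketMaker:
    """Strategy layer: talks to MidClass, never to the exchange directly."""
    def __init__(self, mid):
        self.mid = mid          # exchange middleware
        self.traded_pairs = []  # open self-matched order pairs

    def refresh(self):
        # Pull fresh market and account state before each decision.
        self.mid.get_ticker()
        self.mid.get_account()

    def step(self):
        self.refresh()
        # 1) build the same-price order dict
        # 2) submit both legs via self.mid.create_order(...)
        # 3) poll order status and cancel stale legs
        ...
```

Because MarketMaker only ever calls MidClass methods, swapping exchanges (or injecting a mock for backtesting) means replacing one class, not rewriting the strategy.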

Self-Matching Strategy Logic Reconstruction

The implementation of a self-matching strategy relies on precise market data and fast execution. The MarketMaker class monitors real-time market conditions and leverages same-price buy and sell orders (self-matching) to rapidly accumulate trading volume, which is the core objective of this strategy.

Initialization

In the MarketMaker class’s initialization method, the first step is to retrieve the exchange’s precision settings, followed by initializing key strategy parameters such as quantity precision and price precision.

self.precision_info = self.exchange_mid.get_precision()  # Get precision information
self.price_precision = self.precision_info['price_precision']  # Price precision
self.amount_precision = self.precision_info['amount_precision']  # Trading volume precision

Generating the Self-Matching Order Dictionary

At the heart of the self-matching strategy is the construction of an order dictionary containing both buy and sell orders at the same price level, along with their respective quantities. The code generates the dictionary of self-matching trading orders by calculating the middle price.

def make_duiqiao_dict(self, trade_amount):
    mid_price = self.mid_price  # Mid price
    trade_price = round(mid_price, self.price_precision)  # Round to the price precision
    trade_dict = {
        'trade_price': trade_price,
        'amount': trade_amount
    }
    return trade_dict

Executing Self-Matching Trades

According to the generated dictionary of self-matching trading orders, the self-matching trading transaction is executed. In the code, the create_order method of the exchange middle layer is called to place buy orders and sell orders at the same time.

def make_trade_by_dict(self, trade_dict):
    if self.position_amount > trade_dict['amount'] and self.can_buy_amount > trade_dict['amount']:
        buy_id = self.exchange_mid.create_order('buy', trade_dict['trade_price'], trade_dict['amount'])  # Pending buy order
        sell_id = self.exchange_mid.create_order('sell', trade_dict['trade_price'], trade_dict['amount'])  # Pending sell order
        self.traded_pairs['dui_qiao'].append({
            'buy_id': buy_id, 'sell_id': sell_id, 'init_time': time.time(), 'amount': trade_dict['amount']
        })

Order Status Monitoring

The strategy periodically checks the status of active orders and handles any unfilled or partially filled ones. In the code, this is done by calling the GetOrder method from the exchange middleware (MidClass). Based on the returned order status, the strategy decides whether to cancel the orders. The self-matching order management logic includes the following key steps:

1. Fetching order status:

  • The strategy retrieves the current status of both the buy and sell orders through the exchange API.
  • If the status retrieval fails (e.g., due to a missing order or network issue), the strategy cancels the corresponding order and removes it from the active tracking list.

2. Evaluating order status:

  • The status returned is used to determine whether the order is filled, partially filled, or still open.
  • Typical order statuses include:
    0 (ORDER_STATE_PENDING): Order is open and waiting to be filled.
    1 (ORDER_STATE_CLOSED): Order has been completely filled.
    2 (ORDER_STATE_CANCELED): Order has been canceled.
    3 (ORDER_STATE_UNKNOWN): Order status is unknown or undefined.

3. Handling order status:

  • Both orders unfilled:
    If both buy and sell orders remain unfilled (statusĀ 0), the strategy checks the polling interval (e.g., current_time % 5 == 0) to decide whether to cancel them.
    After cancellation, the strategy updates the order count and removes the orders from the internal record.
  • One order filled, the other unfilled:
    If one side of the self-matching order pair is filled (status == 1) and the other remains unfilled (status == 0), the strategy uses the polling interval condition to decide whether to cancel the unfilled order.
    After cancelling an open order, update the volume and the list of open orders and remove the order from the record.
  • Both orders filled:
    If both the buy and sell orders are fully executed (status == 1), the strategy updates the trade volume counter, and the order pair is removed from the internal tracking list.
  • Unknown order status:
    If the order status is neither 0 nor 1, it is recorded as unknown status and logged.

4. Updating internal records:
After processing the order statuses, the strategy updates the total accumulated trade volume, the list of unfilled or partially filled orders, the order submission and cancellation counters.
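The monitoring steps above can be condensed into a small, exchange-free sketch. The `monitor_pairs` function, its `fetch_status`/`cancel` callbacks, and the tick-based cancellation rule are illustrative stand-ins for the middleware calls, not the article's actual implementation:

```python
# Hedged sketch of the pair-monitoring loop. Status codes follow the
# FMZ-style constants described in the text: 0 = pending, 1 = filled.

ORDER_STATE_PENDING, ORDER_STATE_CLOSED = 0, 1

def monitor_pairs(pairs, fetch_status, cancel, tick):
    """Return (pairs still alive, count of fully-filled pairs)."""
    alive, filled = [], 0
    for pair in pairs:
        buy_st = fetch_status(pair['buy_id'])
        sell_st = fetch_status(pair['sell_id'])
        if buy_st is None or sell_st is None:
            # Lookup failure: cancel both legs and drop the pair.
            cancel(pair['buy_id'])
            cancel(pair['sell_id'])
            continue
        if buy_st == ORDER_STATE_CLOSED and sell_st == ORDER_STATE_CLOSED:
            filled += 1  # both legs matched: count the volume
        elif tick % 5 == 0:
            # At the polling interval, cancel whichever legs are still open.
            if buy_st == ORDER_STATE_PENDING:
                cancel(pair['buy_id'])
            if sell_st == ORDER_STATE_PENDING:
                cancel(pair['sell_id'])
        else:
            alive.append(pair)  # keep waiting for fills
    return alive, filled
```

Passing the status lookup and cancellation as callbacks keeps this loop testable without any exchange connection, which mirrors the MidClass/MarketMaker separation used throughout the article.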

Future Strategy Outlook

The self-matching strategy presented in this article is primarily intended as an educational example for understanding the architectural design of trading frameworks. Its practical application in live trading is limited. For readers interested in real market-making strategies, we plan to introduce more advanced and practical models in future content:

1. Order book market-making strategy

  • A true arbitrage-based approach that captures the bid-ask spread.
  • Places limit orders between the best bid and ask to earn the spread profit.
  • Closely aligns with the traditional profit model of professional market makers.

2. Dynamic market-making strategy

  • Adapts quote prices in real-time based on market volatility.
  • Integrates inventory management and risk control mechanisms.
  • Suitable for adaptive execution across varying market conditions.

3. Multi-level market-making strategy

  • Places orders at multiple price levels simultaneously.
  • Diversifies execution risk and enhances overall return stability.
  • Closer to how professional market-making systems operate in production.

These upcoming strategies will emphasize realistic profit logic and robust risk management, providing quantitative traders with more actionable and valuable insights for developing production-ready systems.

Strategy Outlook

Self-matching strategies that rely on exchange incentive policies, such as fee rebates, VIP tier upgrades, or liquidity mining rewards, are inherently vulnerable to changes in those policies. If an exchange adjusts its fee structure or removes such incentives, the strategy may become ineffective or even produce net losses. To mitigate this, the strategy must adapt to policy changes: dynamically monitor fee rates and trading incentives, diversify profit sources to reduce over-reliance on a single incentive, and include fallback mechanisms or automatic shutdown triggers when profitability thresholds are not met. Moreover, self-matching strategies may raise regulatory red flags, as they can be interpreted as attempts to manipulate market volume; in many jurisdictions, such behavior may violate market integrity laws.

Therefore, traders must stay updated on local legal and compliance requirements, consult legal professionals when deploying volume-based strategies, and avoid practices that could be construed as deceptive or manipulative.

We hope readers use this strategy framework as a foundation to build more robust, compliant, and innovative trading systems. The true value of quantitative trading lies in continuous learning, experimentation, and refinement. May your journey in quant trading be insightful, adaptive, and rewarding!

Strategy Code

import time, json

classMidClass:
    def__init__(self, exchange_instance):
        '''
        Initialize the exchange middle layer
        
        Args:
            exchange_instance: FMZ's exchange structure
        '''self.init_timestamp = time.time()  # Record initialization timeself.exchange = exchange_instance  # Save the exchange objectself.exchange_name = self.exchange.GetName()  # Get the exchange nameself.trading_pair = self.exchange.GetCurrency()  # Get the trading pair name (such as BTC_USDT)defget_precision(self):
        '''
        Get the accuracy information of the trading pair
        
        Returns:
            Returns a dictionary containing precision information, or None on failure.
        '''
        symbol_code = self.exchange.GetCurrency()
        ticker = self.exchange.GetTicker(symbol_code)  # Backtesting system needs
        exchange_info = self.exchange.GetMarkets()
        data = exchange_info.get(symbol_code)

        ifnot data:
            Log("Failed to obtain market information", GetLastError())
            returnNone# Get the accuracy information of the trading pairself.precision_info = {
            'tick_size': data['TickSize'],                  # Price accuracy'amount_size': data['AmountSize'],              # Quantity accuracy'price_precision': data['PricePrecision'],      # Price decimal places precision'amount_precision': data['AmountPrecision'],    # Number of decimal places of precision'min_qty': data['MinQty'],                      # Minimum order quantity'max_qty': data['MaxQty']                       # Maximum order quantity
        }

        returnself.precision_info

    defget_account(self):
        '''
        Get account information
        
        Returns:
            Returns True if the information is successfully obtained, and returns False if the information is failed.
        '''self.balance = '---'# Account balanceself.amount = '---'# Account holdingsself.frozen_balance = '---'# Freeze balanceself.frozen_stocks = '---'# Freeze positionsself.init_balance = Noneself.init_stocks = Noneself.init_equity = Nonetry:
            account_info = self.exchange.GetAccount()  # Get account informationself.balance = account_info['Balance']  # Update account balanceself.amount = account_info['Stocks']  # Update the holdingsself.frozen_balance = account_info['FrozenBalance']  # Update frozen balanceself.frozen_stocks = account_info['FrozenStocks']  # Update frozen positionsself.equity = self.balance + self.frozen_balance + (self.amount + self.frozen_stocks) * self.last_price
            
            ifnotself.init_balance ornotself.init_stocks ornotself.init_equity:
                if _G("init_balance") and _G("init_balance") > 0and _G("init_stocks") and _G("init_stocks") > 0:
                    self.init_balance = round(_G("init_balance"), 2)
                    self.init_stocks = round(_G("init_stocks"), 2)
                    self.init_equity = round(_G("init_equity"), 2)
                else:
                    self.init_balance = round(self.balance + self.frozen_balance, 2)
                    self.init_stocks = self.amount + self.frozen_stocks
                    self.init_equity = round(self.init_balance + (self.init_stocks * self.last_price), 2)
                    _G("init_balance", self.init_balance)
                    _G("init_stocks", self.init_stocks)
                    _G("init_equity", self.init_equity)

                    Log('Obtaining initial equity', self.init_equity)

            self.profit = self.equity - self.init_equity
            self.profitratio = round((self.equity - self.init_equity)/self.init_equity, 4) * 100returnTrueexcept:
            returnFalse# Failed to obtain account informationdefget_ticker(self):
        '''
        Get market price information (such as bid price, ask price, highest price, lowest price, etc.)
        
        Returns:
            Returns True if the information is successfully obtained, False otherwise.
        '''
        self.high_price = '---'  # Highest price
        self.low_price = '---'   # Lowest price
        self.sell_price = '---'  # Ask price
        self.buy_price = '---'   # Bid price
        self.last_price = '---'  # Latest transaction price
        self.volume = '---'      # Trading volume
        try:
            ticker_info = self.exchange.GetTicker()  # Get market price information
            self.high_price = ticker_info['High']    # Update highest price
            self.low_price = ticker_info['Low']      # Update lowest price
            self.sell_price = ticker_info['Sell']    # Update ask price
            self.buy_price = ticker_info['Buy']      # Update bid price
            self.last_price = ticker_info['Last']    # Update latest transaction price
            self.volume = ticker_info['Volume']      # Update trading volume
            return True
        except:
            return False  # Failed to obtain market price information

    def get_depth(self):
        '''
        Get depth information (list of pending orders for buy and sell orders)
        
        Returns:
            Returns True if the information is successfully obtained, False otherwise.
        '''
        self.ask_orders = '---'  # Ask order list
        self.bid_orders = '---'  # Bid order list
        try:
            depth_info = self.exchange.GetDepth()  # Get depth information
            self.ask_orders = depth_info['Asks']   # Update the ask order list
            self.bid_orders = depth_info['Bids']   # Update the bid order list
            return True
        except:
            return False  # Failed to obtain depth information

    def get_ohlc_data(self, period=PERIOD_M5):
        '''
        Get K-line information
        
        Args:
            period: K-line period, PERIOD_M1 refers to 1 minute, PERIOD_M5 refers to 5 minutes, PERIOD_M15 refers to 15 minutes,
            PERIOD_M30 means 30 minutes, PERIOD_H1 means 1 hour, PERIOD_D1 means one day.
        '''
        self.ohlc_data = self.exchange.GetRecords(period)  # Get K-line data

    def create_order(self, order_type, price, amount):
        '''
        Submit an order
        
        Args:
            order_type: Order type, 'buy' refers to a buy order, 'sell' refers to a sell order
            price: Order price
            amount: Order amount 
            
        Returns:
            Order ID number, which can be used to cancel the order
        '''
        if order_type == 'buy':
            try:
                order_id = self.exchange.Buy(price, amount)  # Submit a buy order
            except:
                return False  # Buy order submission failed
        elif order_type == 'sell':
            try:
                order_id = self.exchange.Sell(price, amount)  # Submit a sell order
            except:
                return False  # Sell order submission failed
        return order_id  # Return the order ID

    def get_orders(self):
        '''
        Get a list of uncompleted orders
        
        Returns:
            List of uncompleted orders
        '''
        self.open_orders = self.exchange.GetOrders()  # Get uncompleted orders
        return self.open_orders
    
    def cancel_order(self, order_id):
        '''
        Cancel a pending order
        
        Args:
            order_id: The ID number of the pending order you wish to cancel
            
        Returns:
            Returns True if the pending order is successfully cancelled, False otherwise.
        '''
        return self.exchange.CancelOrder(order_id)  # Cancel the order

    def refresh_data(self):
        '''
        Refresh information (account, market price, depth, K-line)
        
        Returns:
            Returns 'refresh_data_finish!' on success; otherwise returns a string identifying the refresh step that failed.
        '''
        if not self.get_ticker():   # Refresh market price information
            return 'false_get_ticker'
        if not self.get_account():  # Refresh account information
            return 'false_get_account'
        if not self.get_depth():    # Refresh depth information
            return 'false_get_depth'
        try:
            self.get_ohlc_data()    # Refresh K-line information
        except:
            return 'false_get_K_line_info'
        return 'refresh_data_finish!'  # Refresh successful


class MarketMaker:
    def __init__(self, mid_class):
        '''
        Initialize market making strategy
        
        Args:
            mid_class: Exchange middle layer object
        '''
        self.exchange_mid = mid_class  # Exchange middle layer object
        self.precision_info = self.exchange_mid.get_precision()  # Get precision information
        self.done_amount = {'dui_qiao': 0}  # Completed transaction volume
        self.price_precision = self.precision_info['price_precision']    # Price precision
        self.amount_precision = self.precision_info['amount_precision']  # Trading volume precision
        self.traded_pairs = {'dui_qiao': []}  # Trading pairs with pending orders
        self.pending_orders = []      # Uncompleted order records
        self.pending_order_count = 0  # Number of pending orders
        self.buy_amount = 0
        self.sell_amount = 0
        self.fee = 0
        self.fee_rate = 0.08 / 100
        self.chart = {
            "__isStock": True,
            "tooltip": {"xDateFormat": "%Y-%m-%d %H:%M:%S, %A"},
            "title": {"text": "Number of pending orders"},
            "xAxis": {"type": "datetime"},
            "yAxis": {
                "title": {"text": "Number of pending orders"},
                "opposite": False
            },
            "series": [
                {"name": "Buy order quantity", "id": "Buy order quantity", "data": []},
                {"name": "Sell order quantity", "id": "Sell order quantity", "dashStyle": "shortdash", "data": []}
            ]
        }
    
    def refresh_data(self):
        '''
        Refresh data (account, market price, depth, K-line)
        '''
        self.exchange_mid.refresh_data()  # Refresh exchange data
        self.position_amount = 0 if isinstance(self.exchange_mid.amount, str) else self.exchange_mid.amount      # Held position
        self.available_balance = 0 if isinstance(self.exchange_mid.balance, str) else self.exchange_mid.balance  # Account balance
        Log('Check ticker', self.exchange_mid.buy_price)
        self.can_buy_amount = self.available_balance / float(self.exchange_mid.buy_price)  # Quantity available for purchase
        self.mid_price = (self.exchange_mid.sell_price + self.exchange_mid.buy_price) / 2  # Mid price

    def make_duiqiao_dict(self, trade_amount):
        
        '''
        Generate a dictionary of self-matching orders
        
        Args:
            trade_amount: Volume per transaction
        
        Returns:
            Dictionary list of self-matching orders
        '''
        Log('3 Create a dictionary for self-matching orders')

        mid_price = self.mid_price  # Mid price

        trade_price = round(mid_price, self.price_precision)  # Accurate transaction price

        trade_dict = {
            'trade_price': trade_price,
            'amount': trade_amount
        }

        Log('Returns the market order dictionary:', trade_dict)
        return trade_dict
    
    def make_trade_by_dict(self, trade_dict):
        '''
        Execute transactions according to the transaction dictionary
        
        Args:
            trade_dict: transaction dictionary
        '''
        Log('4 Start trading by dictionary')
        self.refresh_data()  # Refresh data
        if trade_dict:
            Log('Current account funds: Coin balance: ', self.position_amount, 'Funds balance: ', self.can_buy_amount)
            Log('Check open positions: Coin limit: ', self.position_amount > trade_dict['amount'], 'Funding restrictions: ', self.can_buy_amount > trade_dict['amount'])
            if self.position_amount > trade_dict['amount'] and self.can_buy_amount > trade_dict['amount']:
                buy_id = self.exchange_mid.create_order('buy', trade_dict['trade_price'], trade_dict['amount'])    # Pending buy order
                sell_id = self.exchange_mid.create_order('sell', trade_dict['trade_price'], trade_dict['amount'])  # Pending sell order
                self.traded_pairs['dui_qiao'].append({
                    'buy_id': buy_id, 'sell_id': sell_id, 'init_time': time.time(), 'amount': trade_dict['amount']
                })
                self.last_time = time.time()  # Update last transaction time

    def handle_pending_orders(self):
        '''
        Processing unfulfilled orders
        '''
        pending_orders = self.exchange_mid.get_orders()  # Get uncompleted orders
        if len(pending_orders) > 0:
            for order in pending_orders:
                self.exchange_mid.cancel_order(order['Id'])  # Cancel uncompleted orders

    def check_order_status(self, current_time):
        '''
        Check order status
        current_time: Polling check times
        '''
        Log('1 Start order information check')
        Log(self.traded_pairs['dui_qiao'])
        self.buy_pending = 0
        self.sell_pending = 0
        for traded_pair in self.traded_pairs['dui_qiao'].copy():
            Log('Check the order:', traded_pair['buy_id'], traded_pair['sell_id'])

            try:
                buy_order_status = self.exchange_mid.exchange.GetOrder(traded_pair['buy_id'])  # Get buy order status
                sell_order_status = self.exchange_mid.exchange.GetOrder(traded_pair['sell_id'])  # Get sell order status
            except:
                Log(traded_pair, 'cancel')
                self.exchange_mid.cancel_order(traded_pair['buy_id'])   # Cancel buy order
                self.exchange_mid.cancel_order(traded_pair['sell_id'])  # Cancel sell order
                self.traded_pairs['dui_qiao'].remove(traded_pair)       # Remove order
                return

            Log('Check the order:', traded_pair['buy_id'], buy_order_status, traded_pair['sell_id'], sell_order_status, [sell_order_status['Status'], buy_order_status['Status']])
            if [sell_order_status['Status'], buy_order_status['Status']] == [0, 0]:
                self.buy_pending += 1
                self.sell_pending += 1
                if current_time % 5 == 0:
                    Log('Check pending orders and cancel pending orders (two unfinished)', buy_order_status['Status'], sell_order_status['Status'], current_time % 5)
                    self.exchange_mid.cancel_order(traded_pair['buy_id'])   # Cancel buy order
                    self.exchange_mid.cancel_order(traded_pair['sell_id'])  # Cancel sell order
                    self.pending_order_count += 1  # Increase the pending order count by 1
                    self.traded_pairs['dui_qiao'].remove(traded_pair)  # Remove order
            elif {sell_order_status['Status'], buy_order_status['Status']} == {1, 0}:
                if buy_order_status['Status'] == ORDER_STATE_PENDING:
                    self.buy_pending += 1
                if sell_order_status['Status'] == ORDER_STATE_PENDING:
                    self.sell_pending += 1
                if current_time % 5 == 0:
                    Log('Check pending orders and cancel pending orders (one leg not yet completed)', buy_order_status['Status'], sell_order_status['Status'])
                    self.done_amount['dui_qiao'] += traded_pair['amount']  # Update transaction volume
                    if buy_order_status['Status'] == ORDER_STATE_PENDING:
                        self.sell_amount += traded_pair['amount']
                        self.fee += sell_order_status['Amount'] * self.fee_rate * sell_order_status['Price']
                        Log('Cancel the buy order and add the unfinished buy list', traded_pair['buy_id'])
                        self.exchange_mid.cancel_order(traded_pair['buy_id'])  # Cancel buy order
                        self.pending_orders.append(['buy', buy_order_status['Status']])  # Record the uncompleted order
                        Log('Before clearing:', self.traded_pairs['dui_qiao'])
                        Log('Clear id:', traded_pair)
                        self.traded_pairs['dui_qiao'].remove(traded_pair)  # Remove order
                        Log('After clearing:', self.traded_pairs['dui_qiao'])
                    elif sell_order_status['Status'] == ORDER_STATE_PENDING:
                        self.buy_amount += traded_pair['amount']
                        self.fee += buy_order_status['Amount'] * self.fee_rate * buy_order_status['Price']
                        Log('Cancel the sell order and add it to the unfinished sell list', traded_pair['sell_id'])
                        self.exchange_mid.cancel_order(traded_pair['sell_id'])  # Cancel sell order
                        self.pending_orders.append(['sell', sell_order_status['Status']])  # Record the uncompleted order
                        Log('Before clearing:', self.traded_pairs['dui_qiao'])
                        Log('Clear id:', traded_pair)
                        self.traded_pairs['dui_qiao'].remove(traded_pair)  # Remove order
                        Log('After clearing:', self.traded_pairs['dui_qiao'])
                
            elif [sell_order_status['Status'], buy_order_status['Status']] == [1, 1]:
                Log('Both orders have been completed')
                self.buy_amount += traded_pair['amount']
                self.sell_amount += traded_pair['amount']
                self.fee += buy_order_status['Amount'] * self.fee_rate * buy_order_status['Price']
                self.fee += sell_order_status['Amount'] * self.fee_rate * sell_order_status['Price']
                Log('Completion status:', buy_order_status['Status'], sell_order_status['Status'], traded_pair['amount'])
                self.done_amount['dui_qiao'] += 2 * traded_pair['amount']  # Update transaction volume
                self.traded_pairs['dui_qiao'].remove(traded_pair)  # Remove order
            else:
                Log('Two orders are in unknown status:', buy_order_status, sell_order_status)
                Log('Unknown order status:', buy_order_status['Status'], sell_order_status['Status'])
                Log('Unknown order information:', traded_pair)
        
    def update_status(self):

        self.exchange_mid.refresh_data()

        table1 = {
            "type": "table",
            "title": "Account information",
            "cols": [
                "Initial funds", "Existing funds", "Self-matching buy amount", "Self-matching sell amount", "Fees", "Total return", "Rate of return"
            ],
            "rows": [
                [   
                    self.exchange_mid.init_equity,
                    self.exchange_mid.equity,
                    round(self.buy_amount, 4),
                    round(self.sell_amount, 4),
                    round(self.fee, 2),
                    self.exchange_mid.profit,
                    str(self.exchange_mid.profitratio) + "%"
                ],
            ],
        }

        LogStatus(
            f"Initialization time: {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(self.exchange_mid.init_timestamp))}\n",
            f"`{json.dumps(table1)}`\n",
            f"Last execution time: {_D()}\n"
        )

        LogProfit(round(self.exchange_mid.profit, 3), '&')

    def plot_pending(self):
        
        Log('Number of knock-on orders:', self.buy_pending, self.sell_pending)
        self.obj_chart = Chart(self.chart)
        now_time = int(time.time() * 1000)
        # Update pending buy order count data
        self.obj_chart.add(0, [now_time, self.buy_pending])
        # Update pending sell order count data
        self.obj_chart.add(1, [now_time, self.sell_pending])


def main():
    '''
    Main function, running market making strategy
    '''
    exchange.IO('simulate', True)
    exchange.IO("trade_super_margin")
    
    target_amount = 1    # Target transaction volume
    trade_amount = 0.01  # Volume per transaction
    trade_dict = {}  # Initialize transaction dictionary
    
    exchange_mid = MidClass(exchange)  # Initialize the exchange middle layer
    Log(exchange_mid.refresh_data())  # Refresh data
    market_maker = MarketMaker(exchange_mid)  # Initialize market making strategy

    check_times = 0
    while market_maker.done_amount['dui_qiao'] < target_amount:  # Loop until the target transaction volume is reached
        Log(market_maker.traded_pairs['dui_qiao'])
        market_maker.check_order_status(check_times)  # Check order status
        Sleep(1000)  # Wait 1 second
        market_maker.refresh_data()  # Refresh data
        if len(market_maker.traded_pairs['dui_qiao']) < 1:
            # Price moved and the self-matching orders were cancelled; once all orders are done, create a new order dictionary
            
            Log('2 The number of trading pairs on the market is less than 1')
            trade_dict = market_maker.make_duiqiao_dict(trade_amount)  # Generate a dictionary of pending orders
            Log('New trading dictionary', trade_dict)
        
        if trade_dict:  # Check if the dictionary is not empty
            market_maker.make_trade_by_dict(trade_dict)  # Execute a trade

        Log('Market making quantity:', market_maker.done_amount['dui_qiao'])  # Record transaction volume

        market_maker.plot_pending()
        market_maker.update_status()

        check_times += 1
        
    Log(market_maker.position_amount, market_maker.can_buy_amount)  # Record holdings and available purchase quantities
    Log('Existing orders:', exchange.GetOrders())  # Record existing orders


def onexit():
    Log("Execute the sweep function")

    _G("init_balance", None)
    _G("init_stocks", None)
    _G("init_equity", None)
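To sanity-check the strategy's core arithmetic without the FMZ runtime, the bid/ask mid-price rounding from make_duiqiao_dict and the per-leg fee accrual from check_order_status can be reproduced in plain Python. This is a minimal sketch: the helper names and the quote/fill numbers are illustrative, not part of the strategy.

```python
# Standalone sketch (plain Python, no FMZ runtime) of two calculations the
# strategy performs: rounding the bid/ask midpoint to the exchange's price
# precision, and the fee paid when both legs of a self-match fill.

def mid_price(bid, ask, price_precision):
    """Bid/ask midpoint rounded to `price_precision` decimal places."""
    return round((bid + ask) / 2, price_precision)

def round_trip_fee(amount, price, fee_rate=0.08 / 100):
    """Total fee for a filled buy leg plus a filled sell leg."""
    return 2 * amount * price * fee_rate

print(mid_price(100.0, 101.0, 2))             # -> 100.5
print(round(round_trip_fee(0.01, 100.0), 6))  # -> 0.0016
```

Because every self-match pays this round-trip fee, the strategy accumulates self.fee alongside done_amount, so the cost of the generated volume stays visible in update_status.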

From: An Overview of Market Making Strategies in Crypto: Architecture Design and FMZ Implementation of the Self-Matching Trading Strategy
