
Release of new HAF API stack 1.28.6 next week
@blocktrades

It has been quite a while since I've made a post, but that doesn't mean the BlockTrades team hasn't been busy; quite the opposite.
The reason I haven't posted is that I've been personally swamped with work since I started using Claude Code back in mid-November, working around 12-14 hours a day, 7 days a week (something I haven't done since my early twenties, when I started my first company).
Usually I report on the whole team's work, but it has been a long time since my last report, and it would take a huge post just to summarize what the other team members have done. So this is mostly a report on the US programming team's work (which is just me and efrias).
Moving from coding to clauding
Our team had previously been using other AI tools like GitHub Copilot to assist with coding, but the gains, while substantial, weren't impressive enough to suck me back into full-time (or perhaps more accurately, "overtime") programming again.
But then I read an article that recommended Claude Code, and when I mentioned it to efrias, he said he had already bought a personal license and switched to it, and that it was much better than Copilot, so I decided to give it a try (Copilot was $10/month; Claude Code was $200/month).
Once I started working with Claude Code, I realized just how much we could potentially accelerate our work (10x or more), and also what the potential blocker to that was: a slow and flaky CI (build-and-test) system that was designed to be just barely fast enough for purely human-based software development, but which completely failed for AI-assisted software development.
Build and test (CI) overhaul
The essential problem was that an AI can add a new feature or fix a bug in 5-15 minutes, but we had tests that ran for up to 90 minutes, with intermittent failures that made it hard to judge whether a code change caused a problem or it was just a random test failure (which leaves both you and the AI particularly confused about what to do next).
So I spent the next 2 1/2 months overhauling the build and test software on most of our repos (we have a lot of repos) on https://gitlab.syncad.com, eliminating intermittently failing tests (AKA flaky tests) and reducing the time to build the software and run the tests. The overhauled CI is still far from perfect, but it is much more consistent and considerably faster. Also, based on the type of change, the CI can decide to do less work, using fewer compute resources and finishing much faster.
NFS caching of HAF and HAF app replay data
Another big change I made was to add NFS-based caching of replayed data for haf and haf apps on our CI builders. This is not only useful for speeding up test times, it also makes it easier to compare "before and after" data for modified code, making it easier to diagnose bugs and performance regressions over time.
Major CI hardware overhaul
We also made major upgrades to our builder hardware. We added two new AMD 9950X3D systems with ZFS-raided gen5 4TB and 8TB Crucial T705 NVMe drives as builders to speed up the bottleneck jobs. We upgraded the hardware hosting gitlab itself to run on a 9950 as well. And we put all the builders on an internal 10Gb network along with a local docker registry and a local artifacts cache to reduce the load on the gitlab server. Most recently (this week), we also added local caching for external packages (e.g. npm and pip) to speed up CI, reduce traffic to external servers, and avoid CI failures when 3rd-party servers are offline or operating in a degraded mode.
Updates to the HAF API server stack (version 1.28.6)
After I got through overhauling CI, I was finally able to kick into high gear on updates to the software itself. I made a lot of changes across the repos, but probably the most significant are the performance improvements to HAF and to the most important HAF app (hivemind).
Speedup of hivemind replay from 60 hours down to 24 hours
Hivemind in particular has long been a bottleneck for our release cycle because even on our very fastest machines (of which we only have a few), it took over 2 1/2 days to do a replay, which meant every code change in hive, haf, or hivemind itself required a 2 1/2 day testing cycle (and that's neglecting another 14 hours for the replay of haf itself if the change was in hive or haf rather than hivemind). Now hivemind fully replays in a single day, making this much less painful.
HAF API stack expected next week
We've already deployed a release candidate of the new stack to https://api.syncad.com for apps to test their code against. I recommend you all do this quickly, as we'll be deploying the same stack to https://api.hive.blog soon as a truly "final" production test before recommending it to other Hive API server operators.
And now for the truly incomprehensible portion of this post...
Below are the changes to the Hive API calls that may impact Hive app developers:
Breaking Changes (fields removed/added/changed type)
1. HIVE_ENABLE_SMT removed from config
Affected methods: database_api.get_config, condenser_api.get_config, call (wrapping get_config)
The HIVE_ENABLE_SMT boolean field has been removed from the config response. Code that reads this field will get undefined/KeyError instead of false.
# Before (HIVE_BLOCKCHAIN_VERSION 1.28.3):
"HIVE_ENABLE_SMT": false
# After (HIVE_BLOCKCHAIN_VERSION 1.28.6):
(field absent)
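One way to stay compatible with both versions is to read the field with a default instead of indexing it directly. A minimal sketch (the dicts below are stand-ins for real database_api.get_config responses):

```python
# Read HIVE_ENABLE_SMT defensively so the same code works against
# 1.28.3 (field present) and 1.28.6 (field absent).
def smt_enabled(config: dict) -> bool:
    # dict.get() returns the default instead of raising KeyError
    return config.get("HIVE_ENABLE_SMT", False)

old_config = {"HIVE_BLOCKCHAIN_VERSION": "1.28.3", "HIVE_ENABLE_SMT": False}
new_config = {"HIVE_BLOCKCHAIN_VERSION": "1.28.6"}  # field removed

assert smt_enabled(old_config) is False
assert smt_enabled(new_config) is False
```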
2. stats.muted_reasons — new field on posts
Affected methods: bridge.get_discussion, bridge.get_account_posts,
bridge.get_ranked_posts, bridge.get_post
Posts that are grayed/muted now include a muted_reasons array in stats.
Previously stats.gray could be true but the reason was not exposed.
// Before:
"stats": { "gray": false, ... }
// After — for affected posts:
"stats": { "gray": true, "muted_reasons": [2], ... }
The muted_reasons field is only present when the post is grayed. The array
contains integer reason codes. Multiple reasons can apply simultaneously.
~43 instances observed across 543K requests.
Reason codes
| Value | Name | Meaning |
|---|---|---|
| 0 | MUTED_COMMUNITY_MODERATION | Explicitly muted by a community moderator |
| 1 | MUTED_COMMUNITY_TYPE | Post in a journal/council community where author lacks member+ role |
| 2 | MUTED_PARENT | Reply to a muted post (inherited) |
| 3 | MUTED_REPUTATION | Author has negative reputation (calculated dynamically) |
| 4 | MUTED_ROLE_COMMUNITY | Author has a muted role in the community (calculated dynamically) |
These codes can be retrieved programmatically via bridge.list_muted_reasons_enum:
curl -s http://api.hive.blog \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"bridge.list_muted_reasons_enum","params":{},"id":1}'
// Response (abridged):
{
"result": {
"MUTED_COMMUNITY_MODERATION": 0,
"MUTED_COMMUNITY_TYPE": 1,
"MUTED_PARENT": 2,
"MUTED_REPUTATION": 3,
"MUTED_ROLE_COMMUNITY": 4
}
}
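For display purposes, the integer codes can be mapped back to their names. A sketch using the enum values shown above (hard-coded here for illustration; a live app could fetch them once from bridge.list_muted_reasons_enum instead):

```python
# Map integer muted_reasons codes back to their enum names.
# Values hard-coded from bridge.list_muted_reasons_enum for illustration.
MUTED_REASONS_ENUM = {
    "MUTED_COMMUNITY_MODERATION": 0,
    "MUTED_COMMUNITY_TYPE": 1,
    "MUTED_PARENT": 2,
    "MUTED_REPUTATION": 3,
    "MUTED_ROLE_COMMUNITY": 4,
}
CODE_TO_NAME = {v: k for k, v in MUTED_REASONS_ENUM.items()}

def muted_reason_names(stats: dict) -> list:
    # muted_reasons is only present when the post is grayed
    return [CODE_TO_NAME.get(code, "UNKNOWN(%d)" % code)
            for code in stats.get("muted_reasons", [])]

print(muted_reason_names({"gray": True, "muted_reasons": [2]}))
# ['MUTED_PARENT']
```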
3. condenser_api.get_reblogged_by — author now included
The original post author is now included in the reblog list. Previously only rebloggers (not the author) were returned.
// Before (alpha6):
["adifginting", "asterhive", "casberp", ...]
// After (bodyval):
["learnelectronics", "adifginting", "asterhive", "casberp", ...]
// ^ author now included as first entry
This is an intentional bug fix. Code that checks .length or iterates over
rebloggers will see one extra entry.
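Code that specifically wants the old behavior (rebloggers only) can filter the author back out. A minimal sketch, assuming you already know the post's author:

```python
# Recover the old "rebloggers only" list from the new get_reblogged_by
# result, which now also includes the post's author.
def rebloggers_only(reblogged_by: list, author: str) -> list:
    return [account for account in reblogged_by if account != author]

result = ["learnelectronics", "adifginting", "asterhive", "casberp"]
print(rebloggers_only(result, "learnelectronics"))
# ['adifginting', 'asterhive', 'casberp']
```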
Value Changes (same fields, different values)
These are due to bug fixes in hivemind 2.0.0dev1. The values change but the types and field names remain the same.
Community statistics (num_pending, sum_pending, num_authors)
Affected methods: bridge.get_community, bridge.list_communities
Pending post counts and payout sums differ slightly due to changes in how
is_paidout is calculated. These are small numeric differences (typically
1–18 fewer pending posts per community in bodyval).
Post counts (post_count)
Affected methods: bridge.get_profile, bridge.get_profiles
Account post_count values are slightly higher in bodyval (typically +1 to +17).
This affects high-volume posters most.
Trending tags statistics
Affected methods: condenser_api.get_trending_tags
Tag-level comments, top_posts, and total_payouts values differ slightly
due to the post counting and payout status changes above.
Children counts
Affected methods: condenser_api.get_content, bridge.get_post
Post children counts may differ by small amounts (e.g., +4) due to the
post counting changes.
Reply ordering
Affected methods: bridge.get_account_posts (sort=replies)
When multiple replies have the same timestamp, the tiebreaking order may differ. This affects which post appears at a given array index but not the set of posts returned.
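Because of this, tests that diff reply lists between the two stacks should compare them as sets of (author, permlink) pairs rather than by array index. A minimal sketch:

```python
# Order-insensitive comparison of reply lists, since tiebreak order
# for same-timestamp replies may differ between stack versions.
def same_replies(a: list, b: list) -> bool:
    def key(post):
        return (post["author"], post["permlink"])
    return {key(p) for p in a} == {key(p) for p in b}

old = [{"author": "alice", "permlink": "re-1"}, {"author": "bob", "permlink": "re-2"}]
new = [{"author": "bob", "permlink": "re-2"}, {"author": "alice", "permlink": "re-1"}]
assert same_replies(old, new)
```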
Non-Changes (confirmed identical)
The following were verified identical across 543K requests:
- Block data: block_api.get_block, condenser_api.get_block, block_api.get_block_range (all match)
- Account data: condenser_api.get_accounts, database_api.find_accounts (match, except an intentional vesting_withdraw_rate 1 → 0 cleanup on a few accounts)
- Transaction data: condenser_api.get_transaction, account_history_api.get_account_history (match)
- Market data: condenser_api.get_order_book, get_ticker, get_recent_trades (match)
- Witness data: condenser_api.get_witnesses_by_vote, get_witness_by_account (match)
- RC data: rc_api.find_rc_accounts, rc_api.get_rc_stats (match)
- Follow data: condenser_api.get_followers, get_following, get_follow_count (match)
- Proposal data: database_api.list_proposals, list_proposal_votes (match)