Postmortem from 2028

From The 2028 Global Intelligence Crisis: A Thought Exercise In Financial History, From The Future

What follows is a scenario, not a prediction.

[...]

A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question “what if we just built this ourselves?”

[...]

The interconnected nature of these systems wasn't fully appreciated until this print, either. ServiceNow sold seats. When Fortune 500 clients cut 15% of their workforce, they cancelled 15% of their licenses. The same AI-driven headcount reductions that were boosting margins at their customers were mechanically destroying their own revenue base.


One Weird Trick to Fix Linker Errors on Apple Silicon After Restore

This post might be useful for maybe like 10 people who are running into issues when compiling software on Apple M* machines.

If you run into an error like this:

ld: warning: ignoring file '/usr/local/lib/libpng.dylib': found architecture 'x86_64', required architecture 'arm64'
ld: warning: ignoring file '/usr/local/lib/libavformat.dylib': found architecture 'x86_64', required architecture 'arm64'

and you have installed these packages using brew, first verify that the dylibs really are x86_64 builds and the linker is not going crazy: use lipo -archs /usr/local/lib/libavformat.dylib

Well, in my case this happened because I had set up the new M3 MacBook from a backup (Time Machine), and that backup was created on an x86_64 machine.

The fix is to reinstall Homebrew natively:

brew bundle dump --global
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew bundle install --global

This first dumps the list of installed packages to ~/.Brewfile, uninstalls and reinstalls Homebrew (now as a native arm64 build), and then installs those packages back.


build your own base debian docker image

This is for somewhat paranoid people. I am not a fan of grabbing Docker images from Docker Hub, especially for base OS images. I would rather build my own images, and the process is not too complex. Below is the quick way to get a base Debian OS container image built:
sudo apt-get install debootstrap
sudo debootstrap jessie jessie/
sudo sh  -c "cd jessie/ && tar cf ../jessie.tar ."
sudo sh -c "docker import - debootstrap/jessie < jessie.tar"
Now you can check if you have the images:
$ sudo docker images

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
debootstrap/jessie      latest              814d88e17a23        15 minutes ago      274MB

$ sudo docker run -i -t debootstrap/jessie /bin/bash
root@92d078f4f147:/#
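
Once imported, the image works as a base like any other. A minimal Dockerfile sketch built on top of it (the curl package here is just an example):

```
FROM debootstrap/jessie
RUN apt-get update && apt-get install -y --no-install-recommends curl
CMD ["/bin/bash"]
```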


Faster setup of virtualenv with devpi

Setting up a virtualenv can take a significant amount of time because it pulls down packages from PyPI. I was surprised that it does not try to use the locally installed packages. One way to speed up rebuilds is to use a local caching mirror of PyPI. This can be accomplished with devpi.

Step 1: Install devpi package

sudo pip install devpi

Step 2: Add the following lines to your /root/.pip/pip.conf file (or ~/.pip/pip.conf if you are not running as root)

[global]
index-url  = http://localhost:3141/root/pypi/+simple/
extra-index-url  = https://pypi.python.org/simple/
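
The same settings can also be written out from a shell snippet — a sketch that assumes the per-user path ~/.pip/pip.conf (swap in /root/.pip/pip.conf when running as root):

```shell
# Write the devpi index settings into the per-user pip config.
# Path is an assumption; root's config lives at /root/.pip/pip.conf.
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
index-url  = http://localhost:3141/root/pypi/+simple/
extra-index-url  = https://pypi.python.org/simple/
EOF
```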

From now on, pip will first try to get the package from the devpi-server running on your localhost and, if devpi-server is not running, will fall back to pypi.python.org.

Step 3: Start devpi-server on your localhost by running devpi-server. Then try installing a few packages or building a virtualenv. The command devpi-server --start will start the server and put it in the background.

TODO - figure out how to start this from init.


They’re our servants, tools

"You can see that our real problem is another thing entirely. The machines only do figuring for us in a few minutes that eventually we could do for our own selves. They’re our servants, tools. Not some sort of gods in a temple which we go and pray to. Not oracles who can see into the future for us. They don’t see into the future. They only make statistical predictions—not prophecies. There’s a big difference there, but Reinhart doesn’t understand it. Reinhart and his kind have made such things as the SRB machines into gods. But I have no gods. At least, not any I can see."

The Variable Man, by Philip K. Dick


Caching is not a silver bullet

Let us take this hypothetical situation. You have to serve a web page. You want the whole page to be sent back in 500 ms (milliseconds). If your user has a good network and is not too far from your webserver, you can further assume that around 50 ms will be spent on the network. This means that you have 450 ms to collect all the data for this web request, do the fancy manipulations (sorting/filtering/updating files etc.) and serve it to the user. You need to make four external calls to get this data - 2 of them to an external web service and 2 of them to your own database.

Now assume that one of your external web service calls takes one second to send back the result 50% of the time, and one of your database queries can take up to a second to give back the result 25% of the time. What will you do to make sure none of your users ever has to wait more than 500 ms to get the page back? (The 500 ms excludes the time taken to download the images/CSS/do fancy JavaScript magic.)
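
Back-of-the-envelope, and assuming the two flaky dependencies misbehave independently: a single one-second response already blows the 450 ms budget on its own, so the fraction of requests that hit at least one slow call is:

```shell
# P(at least one slow call) = 1 - P(web service fast) * P(DB fast),
# with the numbers above: slow 50% and 25% of the time respectively.
awk 'BEGIN { print 1 - (1 - 0.50) * (1 - 0.25) }'
```

That is roughly 62.5% of requests missing the deadline with no mitigation at all, which is what makes caching so tempting in the first place.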

Read more on my website


