Mr. Spock meets Pythagoras

Over Easter break, I did what probably counts as a very specific kind of relaxation: I disappeared into philosophy, AI, Docker logs, and GPU temperatures, and came back with something called PhiloGPT.

The question.

The original idea was not “let me build a platform.” It was much smaller and more personal than that. I wanted to explore a question that has been sitting in the back of my mind for a while: what happens when you treat dialogue itself as the interface for thinking?

Reading philosophy is one thing. You can read Plato, Marcus Aurelius, Nietzsche, or Kant and feel like you understand the shape of an idea. But dialogue is different. In dialogue, ideas push back. They expose contradictions. They force clarification. They stay with you longer. I wanted to build something that could support that kind of exchange, not as a one-off chatbot gimmick, but as a system I could actually live with and improve.

So I started building.

What the system needed.

At first, it was the usual optimistic version of a side project: a frontend, a backend, a database, a model endpoint, and the naive belief that once the messages were flowing, the hard part was over. It was not over.

The first thing I learned was that good dialogue depends on memory. If every session starts from zero, the conversation stays shallow. So I added persistent client memory, not because “memory” sounds impressive in an AI demo, but because without it, the system kept forgetting the very things that make a real conversation meaningful.
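The actual persistence layer lives in the PhiloGPT repo; purely as an illustration of the idea (the table and function names here are my sketch, not the project's), session memory can start as small as an append-and-replay log:

```python
import sqlite3

def remember(db: sqlite3.Connection, session: str, role: str, text: str) -> None:
    # Append one message to a session's persistent history
    db.execute(
        "CREATE TABLE IF NOT EXISTS memory (session TEXT, role TEXT, text TEXT)"
    )
    db.execute("INSERT INTO memory VALUES (?, ?, ?)", (session, role, text))

def recall(db: sqlite3.Connection, session: str) -> list[tuple[str, str]]:
    # Replay a session's history so a new conversation does not start from zero
    return db.execute(
        "SELECT role, text FROM memory WHERE session = ?", (session,)
    ).fetchall()
```

Everything interesting happens on top of this, of course — summarization, relevance, forgetting — but without the log there is nothing to build on.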

Then I ran into a different problem: models are fluent, but fluency is not the same as grounded knowledge. A philosophical exchange falls apart quickly when names, dates, schools of thought, or historical details get fuzzy. That is why the Wikipedia tool exists. Not for novelty. Not for a flashy checkbox. It exists because the conversation became more honest and more useful once the system could ground itself in facts instead of bluffing.
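The real tool wiring is in the repository; as a rough sketch of the general shape only (the name and schema below are illustrative, not PhiloGPT's), a grounding tool declared in the common function-calling style looks like this:

```python
# Hypothetical tool declaration in the widely used function-calling format.
# The model decides when to call it; the runtime performs the actual lookup
# and feeds the article summary back into the conversation as grounding.
WIKIPEDIA_TOOL = {
    "type": "function",
    "function": {
        "name": "wikipedia_lookup",
        "description": "Ground a name, date, or claim in a Wikipedia article summary.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string", "description": "Article title to fetch"},
            },
            "required": ["title"],
        },
    },
}
```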

Then came structure. Open-ended dialogue can be beautiful, but it can also drift. For more reflective or counseling-style interactions, I needed a way for the system to remember where the conversation was heading, what had already shifted, and what the next step might be. That became the counseling plan tool. Again, not because I wanted more tools, but because the conversation itself kept showing me what was missing.

And because some tasks need explicit reasoning rather than beautifully improvised guessing, I added a sandboxed System2 tool for constrained logic and code execution. That made the system feel less magical and more inspectable, which I increasingly think is the healthier direction for AI systems in general.
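PhiloGPT's System2 tool is its own implementation; just to make the idea concrete (this helper is my sketch, and a real sandbox needs far more than this — containers, resource limits, no network), constrained code execution can start from an isolated subprocess with a hard timeout:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    # Run untrusted code in a separate interpreter: -I (isolated mode)
    # ignores environment variables and user site-packages, and the
    # timeout caps runtime. This only limits blast radius; it is a
    # starting point, not a security boundary.
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout
```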

When it grew up.

Somewhere along the way, the architecture grew up. What started as “something on my machine” turned into a proper deployment. The stack now runs through Docker, with the public-facing traffic coming in through a Synology reverse proxy, while a separate GPU server with an RTX 4090 and 24 GB VRAM runs Ollama locally for model inference. That gave me something I cared about from the start: control. Control over the infrastructure, control over the data path, control over the models, and control over cost.

That local-first setup also changed the feel of the project. Once Ollama was in the loop, this stopped being “yet another app that forwards prompts to a cloud API” and became something I could actually shape end to end. At the same time, I did not want to hardwire the whole system to one provider, so I added a provider abstraction layer. That means I can run local models when privacy and cost matter most, and still switch to OpenAI-compatible endpoints when capability matters more. The same runtime, different tradeoffs.
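The project's actual interface is in the repo; as a minimal sketch of what a provider abstraction layer can look like (class and method names here are mine, not PhiloGPT's), the key property is that the runtime only ever sees one call shape:

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """One call shape for every backend, local or remote."""

    @abstractmethod
    def chat(self, messages: list[dict]) -> str:
        """Return the assistant reply for a list of {role, content} messages."""

class EchoProvider(ChatProvider):
    # Stand-in backend: lets you exercise the runtime with no model server.
    # A real OllamaProvider or OpenAIProvider would POST to its endpoint here.
    def chat(self, messages: list[dict]) -> str:
        return messages[-1]["content"]
```

Swapping Ollama for an OpenAI-compatible endpoint then means swapping the concrete class, not touching the runtime.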

Where it got real.

The less glamorous part, and probably the more important part, was learning where systems actually break. Latency was one lesson. A single tool call could easily add enough delay to ruin the feeling of dialogue. One Wikipedia path was taking far too long, and tracing that through the stack led to a satisfying but humbling discovery: I was doing extra work because I had designed the flow badly, not because the universe was against me. Another lesson was production drift. Something that behaved perfectly locally turned into a visible bug in production because an LLM config field was never actually being persisted. That was a good reminder that software only becomes real when it survives contact with deployment.

That is also why I cared about versioned seed patches and steered updates. I did not want every deployment to feel like rolling dice. So the project now carries its own evolution path: schema changes, default data updates, and configuration updates are version-managed and applied deliberately instead of through hopeful manual steps. I suspect the MLOps folks will appreciate that this part gave me almost as much satisfaction as the model work. It turns out I enjoy the intersection where model behavior, product design, and operational discipline all have to work together.
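The versioning mechanics in PhiloGPT are its own; as a toy sketch of the principle (the table and migrations below are invented for illustration), deliberate, version-managed schema evolution can be as simple as comparing a stored version number against an ordered migration map:

```python
import sqlite3

# Each entry is applied exactly once, in order, and the applied
# version is recorded so deployments are repeatable, not hopeful.
MIGRATIONS = {
    1: "CREATE TABLE IF NOT EXISTS personas (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE personas ADD COLUMN prompt TEXT",
}

def migrate(db: sqlite3.Connection) -> None:
    current = db.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(v for v in MIGRATIONS if v > current):
        db.execute(MIGRATIONS[version])
        db.execute(f"PRAGMA user_version = {version}")
```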

What it became.

What I like most about the project is that it still feels personal even though it is now much more than a toy. It started as a nerdy Easter-break experiment about mind, memory, and philosophy in dialogue. It turned into a self-hosted multi-persona AI platform with real-time chat, tool use, deployment discipline, and a cleaner separation between product ideas and infrastructure than I expected when I began.

If you want to try it.

If you want to try it, it is live here:

https://philogpt.truemper.cc/

And if you want to look at the code or contribute, the repository is here:

https://github.com/trueal82/philoGPT

I am sharing it partly because I think it is genuinely interesting, and partly because projects like this get better when other people poke at them, challenge the assumptions, and suggest directions I would not have thought of on my own.

If any of that resonates.

So if you are into AI systems, dialogue design, local inference, MLOps, philosophy, or just slightly overengineered holiday projects, I would be happy to compare notes.

Categories: Uncategorized

The Architecture of Dependence: Personal Identity, Big Tech, and Europe’s Hidden Single Points of Failure

In 2022, a father took a photo of his toddler’s medical condition at the request of a doctor. The image was uploaded to Google Photos. Google’s automated systems flagged the content. His account was suspended. Law enforcement investigated and cleared him of wrongdoing. His Google account — containing years of personal data — was not restored [1].

This was widely discussed as a moderation controversy.

It was not only that.

It was an infrastructure event.


1. When Identity Fails, Everything Fails

A modern personal account is not merely a login. It is a root identity layer that governs:

  • Email
  • Cloud storage
  • Device synchronization
  • Password vaults
  • Passkeys and multi-factor credentials
  • Third-party authentication via federated login
  • Purchased digital media

When that root account becomes inaccessible — whether through policy enforcement, automated fraud detection, or sanctions — the failure cascades across all dependent layers.

This is not a moral argument. It is a systems observation.


2. Federated Identity and the Centralization of Trust

Modern authentication increasingly relies on federated identity, where a central Identity Provider (IdP) authenticates users for multiple services [2]. Third-party platforms defer authentication to that provider rather than managing credentials independently [3].

This reduces friction and improves usability.

It also concentrates trust and availability into a small number of providers.

The architectural question is not whether this model works. It clearly does. The question is what happens when the identity layer becomes unavailable.


3. The Governance Gap: Enterprise vs. Personal Identity

Single Sign-On (SSO) is well understood to create structural concentration. Industry documentation explicitly acknowledges the “single point of failure” characteristic in centralized authentication models [4], and security analysis highlights the enlarged blast radius when credentials are centralized [5].

Both enterprise and personal SSO architectures contain this technical concentration.

The difference is governance.

Enterprise systems operate under contracts, SLAs, administrative backdoors, internal escalation processes, and legal recourse. If a corporate identity account is misflagged, there are intervention mechanisms.

Personal accounts typically operate under automated enforcement and Terms-of-Service frameworks. When an algorithm triggers a restriction, there is often no guaranteed human review or time-bound escalation path.

The technology may be similar. The recovery architecture is not.


4. Passkeys: Security Improvement, Structural Tradeoffs

Passkeys, based on FIDO standards, significantly improve authentication security by replacing passwords with phishing-resistant cryptographic credentials [6].

However, implementation models differ.

  • Synced passkeys replicate credentials across devices for convenience.
  • Device-bound (hardware) passkeys remain tied to a physical authenticator.

Articles summarizing FIDO2 research note that synced models introduce ecosystem dependencies distinct from device-bound credentials [7].

Both models improve security.

Only one reduces dependency coupling.

The tradeoff is usability — and modern ecosystems optimize for convenience.


5. Automated Enforcement as Infrastructure Control

In the father’s case, automated content scanning triggered account suspension [1].

This pattern repeats in other contexts: fraud detection systems, AI moderation models, payment anomaly detection, or geopolitical sanctions can restrict access without prior human review.

When identity becomes infrastructure, automated enforcement becomes infrastructure control.

In tightly coupled systems, the algorithm is not merely moderating content. It is determining access to communication, storage, and authentication.


6. The Hardware Illusion

In 2025, developer Paris Buttfield-Addison reported that his Apple ID was disabled after a gift card redemption attempt, leaving him without access to decades of digital assets and rendering hardware deeply constrained within the ecosystem [8].

This illustrates a structural reality:

Owning hardware does not guarantee operational control if the identity layer governing that hardware is externalized.

The device may be physically present. Its functional autonomy depends on identity continuity.


7. From Individual Lockout to Strategic Exposure

In 2025, following U.S. sanctions related to the International Criminal Court (ICC), Microsoft suspended the email account of ICC Chief Prosecutor Karim Khan, affecting operational communication at a European-based international institution [9].

This event was not about personal inconvenience.

It demonstrated that when identity and communication infrastructure are externally governed, geopolitical actions can directly impact institutional continuity.

Now aggregate that principle:

When millions of citizens depend on foreign Identity Providers for authentication and recovery, citizen-level dependency scales into national-level exposure.

This is where “digital sovereignty” stops being rhetorical and becomes architectural.


8. Dependency Coupling

Each individual design choice is rational:

  • Federated login reduces password fatigue [2].
  • SSO simplifies authentication [4].
  • Passkeys reduce phishing risk [6].
  • Cloud sync improves usability.

Together, however, they create dependency coupling — a condition in which multiple critical system functions rely on the same root authority.

The fragility does not arise from any single component.

It arises from the density of coupling.

In enterprise architecture, dependency coupling is analyzed, stress-tested, and contractually mitigated. In personal ecosystems, it emerges organically through convenience.
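A toy model makes the coupling argument concrete (the service names and the all-or-nothing failure rule are deliberately simplified): when every service transitively depends on one root identity provider, the blast radius of that single node is the entire graph.

```python
def blast_radius(deps: dict[str, set[str]], failed: str) -> set[str]:
    # deps maps each service to the services it depends on.
    # A service is down if any of its dependencies is down (transitively).
    down = {failed}
    changed = True
    while changed:
        changed = False
        for service, needs in deps.items():
            if service not in down and needs & down:
                down.add(service)
                changed = True
    return down
```

With `email`, the password `vault`, and purchased `media` all hanging off a single `idp`, failing the `idp` takes out everything — which is exactly the density-of-coupling point, not a claim about any one component.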


9. Sovereignty as Systems Property

A sovereign cloud without sovereign identity is only partially sovereign.

If authentication, recovery, and credential synchronization remain externally governed, infrastructure autonomy remains incomplete.

The issue is not vendor nationality.

The issue is architectural concentration without fallback pathways.


10. Construction, Not Panic

Resilience does not require isolation. It requires optionality:

  • Decoupled authentication strategies
  • Offline recovery mechanisms
  • Reduced dependency density
  • Device-bound credentials where appropriate
  • Architectural awareness at both citizen and policy levels

This discussion sits at the intersection of cloud architecture, AI governance, identity systems, and European strategic autonomy.

If you are working on sovereign cloud infrastructure, independent AI ecosystems, or resilient identity architectures — I would welcome a structured exchange.

Procrastinating by being busy optimizing

If I just had that perfect keyboard. It would make my workflow just so much better. Surely I can invest some time to find the perfect one. (BTW, if you have known me for a while: I have had at least 20 keyboards in the last ten years.)

Oh, and if I just had the perfect headphones. They would make my workflow just so much more enjoyable.

Oh, and if I just had that perfect pen…

I think you get the point? There are two mechanisms working here:

  • Procrastination
  • Wanting mind

The procrastination part certainly hits very close to home for me. I am learning math, which in parts seems boring, and in parts is quite challenging. It is also an area of past struggles for me, one that I try to evade.

And then there is the Wanting Mind. “If I just had” is a good indicator for this. There is a deep longing inside me to be recognized as a nerd. By having stuff. Also, there is a side which tells me “I deserve it”. Well, all of this, of course, is just some distraction from my fear of not being enough. Of being an imposter.

Good thing if I notice that. In that case, I can be aware and focus back on my math learning. However, sometimes that can consume several hours.
Then it’s worth writing a blog post about it.
To avoid going back to work.

Categories: Uncategorized

Messing with local ports

Being able to develop stuff locally is certainly one of the things I love most about working with Azure and Python. Later, you can hyper-scale the shit out of your code, but you can also debug it first on your tiny machine.

if __name__ == '__main__':
    # This code only runs when executing the script directly (not on App Service)
    logging.info("Starting Flask app in local development mode")
    socketio.run(app, host='0.0.0.0', port=5000, debug=True)

This piece of code is probably run a thousand times every day by a thousand developers. However, it was quite difficult to find an easy answer on Google for this:

OSError: [Errno 48] Address already in use: ('0.0.0.0', 5000)

Easy to understand what it means: port 5000 is already in use. WTF? Who is running something on my machine???

Of course, me. Another PyCharm window or something. But no, not this time. Strange. netstat to the rescue 🙂 Well, not on macOS.

For whatever reason it cannot show you ports in LISTEN, at least not in any way that I could find. But how can we figure it out? lsof can help us here:

lsof -i :5000
COMMAND   PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
ControlCe 650 trueal   10u  IPv4 0xa162a6f756348686      0t0  TCP *:commplex-main (LISTEN)
ControlCe 650 trueal   11u  IPv6  0x2c61017c47991be      0t0  TCP *:commplex-main (LISTEN)

WTF is a commplex? Another round of googling shows it belongs to AirPlay. So technically, no developer on macOS should be able to run the demo code?? So at least now you know, too, what is hogging your precious port 5000. You’re welcome.
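If you don’t want to fight the OS over port 5000 at all, one workaround (my own sketch, not part of the original snippet) is to let the OS pick a free ephemeral port for you:

```python
import socket

def free_port() -> int:
    # Binding to port 0 asks the OS for any unused ephemeral port;
    # we read it back and close the socket so the server can claim it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("0.0.0.0", 0))
        return s.getsockname()[1]
```

Then something like `socketio.run(app, host='0.0.0.0', port=free_port(), debug=True)` sidesteps AirPlay entirely — with the small caveat of a tiny race window between closing the probe socket and the server binding it.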

Categories: Selfhosting

WordPress Admin Panel extremely slow

Hosting a blog yourself is certainly one of the best things to do. Not only will you have full control of your data, but you will certainly learn a lot about technology along the way.

So today I decided to start writing a blog, once more, and hosting it myself. Here. Post 1.

But the /wp-admin/ panel is so painfully slow. The pages take up to six seconds to load.

But how can you figure out what is happening? I mean, sure, it could be my server. It’s just a Synology with a simple 4-core CPU.
But then again, it’s just PHP?

So I found this nice plugin for WordPress called Query Monitor. It will be visible in the top bar of the admin panel and show you exactly what is happening. And I found this:

cURL error 28: Resolving timed out after 3001 milliseconds

Seems like the admin panel is trying to connect to http://api.wordpress.org/translations/core/1.0/ and is running into a timeout.

Strange. Connecting via SSH and running an nslookup is also slow as hell. So it really seems that the DNS lookup is somehow broken. In my setup, my router (a FritzBox) was causing the delay in DNS lookups.
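A quick way to confirm a resolver problem from any machine (this helper is my own, just for illustration) is to time a single name resolution directly:

```python
import socket
import time

def dns_time(host: str) -> float:
    # Time one name resolution in seconds; values in the multi-second
    # range point at the resolver, not at PHP or the web server.
    start = time.monotonic()
    socket.getaddrinfo(host, 443)
    return time.monotonic() - start
```

If `dns_time("api.wordpress.org")` sits near three seconds while the actual HTTP transfer is fast, the admin panel slowness is your DNS path, exactly as Query Monitor’s cURL error suggested.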

Solution: switching to the public DNS servers 1.1.1.1 and 8.8.8.8 in the DSM settings, and all problems were gone.

Next step: run a proper DNS caching server at home. I think it will be Pi-hole, but that’s another blog post 🙂