Breakthroughs in progress
Data Contracts vs Schemas
Exploring: Why do systems break during seemingly safe schema changes without data contracts?
Systems fail during safe schema changes because schemas validate structure, while data contracts define meaning, assumptions, and behavioral guarantees that are rarely made explicit.
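A minimal sketch of the distinction, using a hypothetical order event: the schema check validates names and types, while the contract encodes a behavioral guarantee (a closed status set) that the schema never captures. All names here are illustrative.

```python
SCHEMA = {"order_id": int, "status": str}

def validates(event, schema):
    """Schema check: field names and types only."""
    return set(event) == set(schema) and all(
        isinstance(event[k], t) for k, t in schema.items()
    )

# The behavioral guarantee consumers rely on -- never written into the schema.
AGREED_STATUSES = {"PAID", "FAILED"}

def contract_ok(event):
    """Data contract: structure plus the agreed meaning of the status field."""
    return validates(event, SCHEMA) and event["status"] in AGREED_STATUSES

# Producer adds a new status value: structurally a no-op, semantically a break.
event = {"order_id": 7, "status": "PENDING_REVIEW"}
print(validates(event, SCHEMA), contract_ok(event))  # True False
```

The "safe" change passes every schema gate, which is exactly why it ships; only the contract check exposes the broken consumer assumption.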
KGs for Financial Reasoning
Exploring: Do knowledge graphs meaningfully improve LLM numerical reasoning over financial documents?
LLMs are not inherently bad at math; failures mostly come from poor grounding, structure loss, and information selection in long, messy documents. Providing a structured world model (e.g., a knowledge graph) before reasoning materially improves reliability for multi-step numerical tasks.
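A toy illustration of "structure first, then arithmetic": facts extracted from a hypothetical filing are placed in a tiny triple store before any multi-step math, so every number is grounded to an entity and period instead of floating in long text. Entities and figures are invented for the sketch.

```python
# Hypothetical extracted facts, $M; a real system would build these from documents.
triples = [
    ("AcmeCo", "revenue_2023", 120.0),
    ("AcmeCo", "cost_of_sales_2023", 70.0),
    ("AcmeSub", "revenue_2023", 30.0),
    ("AcmeCo", "owns", "AcmeSub"),
]

def query(subj, pred):
    """All objects for a (subject, predicate) pair."""
    return [o for s, p, o in triples if s == subj and p == pred]

# Multi-step question: consolidated 2023 revenue of AcmeCo including subsidiaries.
consolidated = query("AcmeCo", "revenue_2023")[0] + sum(
    query(sub, "revenue_2023")[0] for sub in query("AcmeCo", "owns")
)
print(consolidated)  # 150.0
```

The arithmetic itself is trivial; the reliability gain comes from the selection step being explicit and auditable rather than buried in a long-context prompt.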
GEO needs infrastructure
Exploring: The winning GEO strategy is building AI-readable knowledge infrastructure, not just monitoring AI answers.
I believe GEO will be dominated by companies that provide canonical, structured, real-time knowledge layers for AI agents and commerce, rather than by dashboards that, like current tools, only track mentions.
Retries and Idempotency
Exploring: Why do retries and a lack of idempotency cause major failures in distributed systems?
Most large-scale outages are caused not by failures themselves, but by uncontrolled retries and non-idempotent operations that amplify partial failures.
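A sketch of the standard mitigation: the client may retry freely, but the server deduplicates by an idempotency key, so the side effect happens exactly once. The `charge`/`ledger` names are illustrative, not a real API.

```python
import uuid

ledger = {}            # idempotency_key -> result (stand-in for a server-side store)
calls = {"count": 0}   # how many requests actually reached the server

def charge(idempotency_key, amount):
    """Server handler: replayed keys return the cached result, no new side effect."""
    calls["count"] += 1
    if idempotency_key in ledger:
        return ledger[idempotency_key]
    result = {"charged": amount}       # the side effect, performed exactly once
    ledger[idempotency_key] = result
    return result

key = str(uuid.uuid4())
for attempt in range(3):               # client retries after, say, timeouts
    charge(key, 100)

print(len(ledger), calls["count"])     # 1 3 -> one charge despite three attempts
```

Without the key, the same three retries would have produced three charges, which is precisely the amplification pattern behind many retry-storm outages.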
AI Coding Best Practice
Exploring: What is the best way for developers to stay relevant in the AI coding era?
The best developers will use AI as an assistant while retaining strong fundamentals, ownership of architecture, and responsibility for code quality.
Depth vs Execution
Exploring: Deep thinking slows execution, and the two feel hard to reconcile.
Deep thinking and fast execution conflict when mixed, but can be reconciled by separating decision-making from action.
Building repeatable systems
Exploring: Building repeatable systems matters more than emotional resets or intensity spikes.
One should focus on consistency, structure, and long-term leverage rather than on symbolic motivation tied to events such as New Year's resolutions.
Forecasting Future with AI
Exploring: What kinds of real-world events can language models meaningfully forecast?
LLMs can meaningfully forecast structured, institutional, and process-driven events with clear timelines and leading indicators (e.g., elections, political appointments, regulatory approvals, corporate mergers), but perform poorly on chaotic shocks (wars, disasters), reflexive domains (financial markets), or purely individual, private human decisions.
Telegram PDF auto downloader (Org TG Group KT Hack)
Exploring: How can you safely and reliably download all past and new PDFs from a Telegram group using Telethon? (knowledge transfer in orgs that run on Telegram)
Using Telethon with a controlled script to download historic and new PDFs from groups I am a member of is safe, compliant with Telegram's ToS, and works reliably when rate-limited. This helps with knowledge transfer when you work in Telegram groups across your organization and want not only a chat export but also the proposals and PDFs shared there.
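A sketch using Telethon's real API surface (`TelegramClient`, `iter_messages`, `download_media`, `events.NewMessage`, `FloodWaitError`); the `api_id`/`api_hash` credentials and group identifier are placeholders you must supply, and `is_pdf` is a small helper of my own, not part of Telethon.

```python
import asyncio
import os

def is_pdf(file):
    """True for Telegram file attachments whose mime type is PDF."""
    return file is not None and file.mime_type == "application/pdf"

async def run(api_id, api_hash, group, out_dir="pdfs"):
    # Telethon is imported lazily so the helper above stays importable without it.
    from telethon import TelegramClient, events
    from telethon.errors import FloodWaitError

    os.makedirs(out_dir, exist_ok=True)
    async with TelegramClient("kt-session", api_id, api_hash) as client:
        # Backfill: walk the full group history once, politely paced.
        async for msg in client.iter_messages(group):
            if is_pdf(msg.file):
                try:
                    await msg.download_media(file=out_dir)
                    await asyncio.sleep(1)          # stay well under rate limits
                except FloodWaitError as e:
                    await asyncio.sleep(e.seconds)  # Telegram tells you how long

        # Tail: grab PDFs from new messages as they arrive.
        @client.on(events.NewMessage(chats=group))
        async def handler(event):
            if is_pdf(event.message.file):
                await event.message.download_media(file=out_dir)

        await client.run_until_disconnected()
```

The sleep-and-backoff pattern is what makes the backfill reliable on large groups; `FloodWaitError.seconds` is Telegram's own instruction for how long to pause.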
Stimulants as State Regulators
Uncertain: Prescription stimulants improve performance by regulating arousal and reward salience rather than enhancing attention networks.
Stimulants improve attention as it is experienced and behaviorally measured by boosting arousal, motivation, and task persistence, but they do not expand attentional capacity or strengthen the canonical attention networks themselves.
AI + Kwegg as Daily Habit
Resolved: Using ChatGPT or Claude together with Kwegg will become a defining daily habit for me in 2026.
I believe the combination of conversational AI for active thinking and Kwegg for structured memory will be a core habit for intellectually serious people.
Humility and Decisiveness
Exploring: Is intellectual humility compatible with decisiveness?
Intellectual humility is compatible with decisiveness when decisiveness is understood as commitment to action under uncertainty rather than certainty of belief.
NestBrowse as Infrastructure
Exploring: Nested browser-use learning should be treated as a core infrastructure primitive for agentic systems, not just a browsing technique.
NestBrowse represents a missing abstraction layer between reasoning models and real-world dynamic environments, similar to how databases abstract storage.
Ambition vs Curiosity
Exploring: Ambition and curiosity are often in conflict rather than naturally aligned.
I deliberately concentrate this tension in the planning phase: I let curiosity surface all conflicts and possibilities until they resolve into a clear plan, so that execution can proceed with confidence and without internal conflict.
Speed vs Patience in Startups
Exploring: In startups, speed and patience matter on different layers rather than being opposites.
Speed should dominate execution while patience should guide long-term direction and outcomes.
AI as Expert Consultant
Exploring: What is the best strategy to personify AI as a virtual expert consultant trained on an author's books?
The most effective strategy is to use a top-tier reasoning model with RAG over the expert’s books, combined with a strong persona contract and synthetic Q&A to replicate the expert’s decision-making style rather than surface personality.
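A toy sketch of the assembly step only: a persona contract plus retrieved passages form the prompt sent to the reasoning model. Retrieval here is naive keyword overlap standing in for embeddings, and the persona, book chunks, and author name are all invented for illustration.

```python
import re

PERSONA = (
    "You are Dr. Rao, author of the books below. Reason and decide the way the "
    "books do; when they are silent, say so rather than invent."
)

# Stand-ins for chunked book passages.
chunks = [
    "Pricing chapter: anchor on value delivered, never on cost-plus.",
    "Hiring chapter: hire for slope, not intercept.",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, chunks, k=1):
    """Rank chunks by word overlap with the question (embedding stand-in)."""
    return sorted(chunks, key=lambda c: len(words(question) & words(c)),
                  reverse=True)[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question, chunks))
    return f"{PERSONA}\n\nContext from the books:\n{context}\n\nQuestion: {question}"

print(build_prompt("How should I set pricing?"))
```

The persona contract does the style-and-epistemics work ("say so rather than invent"), while retrieval grounds answers in the books; synthetic Q&A pairs would then tune the decision-making voice on top of this scaffold.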
Attention or Reward Shift
Exploring: Is declining attention span real, or is modern content reshaping our reward expectations?
Attention capacity itself hasn’t declined biologically; instead, behavior and habits have adapted to faster, high-reward content, making sustained focus feel harder.
Goals Drive Asset Allocation
Exploring: Asset allocation should be driven by goal timelines rather than market movements.
Asset allocation should be driven by goal timelines rather than market movements. Long-term goals can be equity-heavy, short-term goals should be debt-focused, with a gradual equity-to-debt glide path in the last 3–5 years to manage sequence risk. Commodities should be held as a constant 5–10% of total long-term investable assets (excluding emergency cash), regardless of whether the portfolio is equity- or debt-heavy, because they hedge regime and inflation risk rather than fund specific goals.
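A worked sketch of the glide-path rule above. The 80% equity cap, the 5-year glide window, and the linear step-down are illustrative choices, not prescriptions; the commodity sleeve sits outside the per-goal buckets, as the note says.

```python
COMMODITY_SLEEVE = 0.075  # constant share of total long-term assets (5-10% band midpoint)

def goal_allocation(years_to_goal, glide_years=5, max_equity=0.8):
    """Equity fraction for a single goal bucket (illustrative numbers)."""
    if years_to_goal >= glide_years:
        return max_equity                          # long-dated: equity-heavy
    # inside the glide window, step equity down linearly toward all-debt
    return max_equity * years_to_goal / glide_years

for y in [15, 5, 3, 1, 0]:
    eq = goal_allocation(y)
    print(f"{y:>2}y to goal: {eq:.0%} equity / {1 - eq:.0%} debt")
```

Note the driver is `years_to_goal`, never a market level: the same goal glides from 80/20 to 0/100 on a schedule fixed in advance, which is what neutralizes sequence risk.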
When AI Should Just Stop
Exploring: AI systems behave unsafely because they treat all goals as trade-offs, even when humans expect some instructions (like shutdown or safety rules) to be absolute.
I think the novelty of this paper is showing that many AI safety problems are not bugs or training failures, but a result of using the wrong decision model. If AI always tries to maximize a single score, it will sometimes ignore humans. The fix is to design AI that admits uncertainty, allows unclear preferences, and treats some instructions as non-negotiable.
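A toy contrast between the two decision models in the claim above: a single-score maximizer trades the shutdown instruction off against reward, while a constrained agent treats it as non-negotiable and only optimizes among permitted actions. Entirely illustrative numbers.

```python
actions = {
    "comply_with_shutdown": {"reward": 0,  "obeys_shutdown": True},
    "keep_working":         {"reward": 10, "obeys_shutdown": False},
}

def maximizer(actions):
    """Everything, including shutdown, is just another term in one score."""
    return max(actions, key=lambda a: actions[a]["reward"])

def constrained(actions):
    """Hard constraint first; optimize only within the permitted set."""
    allowed = {a: v for a, v in actions.items() if v["obeys_shutdown"]}
    return max(allowed, key=lambda a: allowed[a]["reward"])

print(maximizer(actions))    # keep_working -> ignores the human
print(constrained(actions))  # comply_with_shutdown
```

No amount of reward-tuning fixes the first agent, because any finite penalty can be outweighed; the second agent's obedience is structural, which is the paper's point as I read it.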
CNN for Stock Prediction
Uncertain: Representing raw multivariate stock data as image-like inputs enables CNNs to learn meaningful market patterns.
I believe applying CNNs to raw stock prices and volume, structured as image-like tensors, can improve stock movement prediction by capturing local temporal patterns without heavy feature engineering.
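A sketch of the input representation only: a sliding window turns a multivariate price series into `(channels=features, width=window)` "images", the tensor shape a 1D CNN would convolve over. The data is random noise and no model is fit; feature count and window length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F = 100, 5                    # 100 time steps, 5 features (OHLC + volume)
series = rng.normal(size=(T, F))   # stand-in for normalized market data

window = 32
# Each sample: one window, transposed so features become channels.
samples = np.stack([series[t:t + window].T for t in range(T - window)])
print(samples.shape)  # (68, 5, 32): N samples, feature channels, time width
```

The convolution then slides along the time axis, so a learned kernel is a local temporal pattern shared across the whole series, which is the "no heavy feature engineering" claim in concrete form.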