All Posts


The Day Our Models Got Too Smart For Their Own Good: Fighting Data Leakage at Factal
Why We Don't Want Leaky Models Here at Factal, we love data. We love machine learning. And, let’s be honest, we love seeing validation scores go up. There’s a distinct dopamine hit when you look at a metric dashboard and see accuracy hovering around 97%, 98%, or even the mythical 99%. For a brief, shining moment last week, we thought we were gods. Our latest batch of models - designed to automate some internal categorizations - weren't just performing well; they were operating…
Ritvik Mandyam
Mar 10 · 4 min read


The Day Factal Met a Robot with No Soul (and Why That’s a Good Thing)
Building an automated data analyst is a lot like being a digital detective. You want to give it a pile of data, walk away to grab a coffee, and come back to a whiteboard filled with "Aha!" moments. But as we learned recently, even the smartest AI can’t find a heartbeat in a mannequin. Here is the story of how Factal Signals refused to lie to us, and why that makes it the most honest member of our team. What is Factal Signals, Anyway? Before we get into the "dr…
Ritvik Mandyam
Mar 4 · 3 min read


How We Taught an AI to Navigate a Database That Forgot Its Own Map
Let’s talk about the specific kind of dread you feel when you open a new database schema for the first time. You know the feeling. You’ve been promised a "structured data warehouse." You have visions of clean Third Normal Form, beautiful foreign key constraints, and column names that actually describe what’s inside them. You fire up your connection, query the information schema, and… Silence. Zero foreign keys. No primary key constraints. Table names like xref_p10_not_deleted…
Ritvik Mandyam
Jan 22 · 6 min read


Stop Trying to "Fix" Broken LLM JSON. It’s Poison.
Let’s have an honest conversation around the digital water cooler. How many hours of your life have you spent writing regular expressions to try and salvage a piece of JSON that an LLM mutilated? You know the drill. You ask GPT-4 for a nice, clean Python dictionary of customer sentiment scores. It thinks for a while, spits out 500 tokens of absolute gold, and then… forgets the final closing curly brace. Or maybe it adds a trailing comma that chokes Python’s json.loads() like…
Ritvik Mandyam
Dec 25, 2025 · 3 min read


Please, For the Love of Compute, Stop Telling AI to "Think Step-by-Step"
Here at Factal, we have spent the better part of the last few years whispering sweet nothings into the ears of Large Language Models. We, like you, learned the magic incantations. We learned that if you want GPT-4 to solve a math problem or analyze a messy dataset without hallucinating a conspiracy theory, you have to use the Golden Prompt: "Let's think step by step." It was the WD-40 of prompt engineering. It fixed everything. It turned "System 1" rapid-fire guessers into "System 2"…
Ritvik Mandyam
Dec 21, 2025 · 4 min read


The "Six Degrees of Separation" Problem: Teaching AI to Navigate the Relationship Web
At Factal, we’ve decided that standard schema mapping is for amateurs. We're currently knee-deep in a challenge we call "The Redundant Relative." In complex ecosystems like Salesforce, objects are often related in ways that would make a soap opera writer blush. Let's consider the case of the hypothetical Timesheet Detail object, the one that inspired our last post. It might have a direct lookup to an Account, but it also links to a Case, which - surprise! - has its own lookup…
Ritvik Mandyam
Dec 19, 2025 · 1 min read


The "Case" of the Missing Group By: A Salesforce SOQL Mystery
This image has nothing to do with the post; our founder is just in Cambodia right now 😉 If you’ve ever tried to write a SOQL query and thought, "I’ll just group these results by Case Number, because I am a rational human being," Salesforce has a very special middle finger reserved just for you. In the world of Salesforce, the CaseNumber field is a bit of a peacock. It looks like a string, it acts like a string, but the moment you try to use it in a GROUP BY clause, Salesforce…
Ritvik Mandyam
Dec 18, 2025 · 2 min read


Counting On LLMs
I spend WAY too much time on Reddit, and recently, I've been noticing an interesting type of post on the ChatGPT subreddit: people are asking GPT 5.2 how many Rs are in "garlic". Way back when (i.e. mid-2024), people used to do a similar trick where they asked GPT 4o how many Rs were in "strawberry", and they got similar results - both these LLMs would regularly come back with the wrong answer (apparently, the Venn diagram of people who like to cook and people who like to mess…
Ritvik Mandyam
Dec 15, 2025 · 2 min read


Who Survives When the AI Bubble Pops?
Every few decades, the tech industry rediscovers a universal truth: the future is always more expensive than we thought it would be. We’ve hit that point with AI. Right now, we’re still drunk on the miracle of it all - chatbots that reason, copilots that code, agents that talk. It’s intoxicating. It also costs a small fortune to run. The dirty secret is that almost every “AI company” in your feed is currently subsidized by venture capital or by model providers who are themselves…
Ritvik Mandyam
Oct 21, 2025 · 8 min read
