A few days ago, James Gallagher wrote a careful, measured feature for BBC News about a woman named Abi. She lives with health anxiety, and for the past year she has been turning to a chatbot for medical guidance.
Sometimes it helped her. It looked at her symptoms, suggested a pharmacist, and she walked away with the right antibiotic. Once, after a fall on a hike, it told her she had punctured an organ and needed to go to A&E immediately. She sat in the emergency department for three hours, the pain eased, and she went home. The chatbot had been wrong.
You can read the article here. It is worth your time.
Abi is not unusual. A lot of us quietly do the same thing. We type a symptom into a chatbot, hope for a second opinion, and feel a small comfort that we have asked somebody. The software is always awake. It never rushes us. It sounds thoughtful.
That feeling of being attended to is exactly the problem.
What the research is saying
The piece cites two studies that are worth sitting with, because they are telling us something about design, not just about software mistakes.
The Reasoning with Machines Laboratory at the University of Oxford gave chatbots detailed, complete clinical scenarios written by doctors. The chatbots were 95 percent accurate. Almost perfect. Then they ran the same exercise with 1,300 real people, who were asked to describe each scenario to the chatbot in their own words. Accuracy collapsed to 35 percent. Almost two-thirds of the time, people walked away with the wrong advice.
Professor Adam Mahdi, one of the researchers, put it plainly. "When people talk, they share information gradually, they leave things out, they get distracted."
The problem was not the software's cleverness. The problem was the conversation.
The Lundquist Institute in California tested five well-known chatbots across cancer, vaccines, stem cells, nutrition, and athletic performance. More than half of the answers were classed as problematic. One chatbot suggested naturopathy as a treatment for cancer.
The lead researcher, Dr Nicholas Tiller, is pointing to a design choice rather than a mistake. "They are designed to give very confident, very authoritative responses, and that conveys a sense of credibility, so the user assumes that it must know what it's talking about."
Confidence is a feature, not a bug. And it is the exact feature that is misleading people about their health.
Where we agree
This article was not written about us. It was written about a different category of product entirely. But the lessons travel, and we agreed with four of them long before the piece was published.
The conversation is where accuracy breaks down. Real people do not submit neatly organised case studies. They ramble, contradict themselves, downplay, escalate, circle back to the same feeling. Any software built to respond to them has to account for that, not assume clean inputs.
Confident tone is dangerous. Software that sounds certain is more convincing than it deserves to be. The reader does not calibrate for the possibility that the answer is wrong.
Most people cannot tell when the answer is wrong. This is what Dr Tiller meant. If you had the expertise to catch the mistake, you would not be asking in the first place.
The absence of a professional is not a neutral absence. An honest conversation with a GP has interruptions, follow-up questions, a second look, caution, and the option to escalate. A chatbot has none of that by default.
Professor Sir Chris Whitty, England's Chief Medical Officer, described current answers as "both confident and wrong". That phrase stayed with me for a few days. It is the cleanest description I have read of what makes these tools risky for the people who are most likely to turn to them.
Where our thinking went
ManaSmurti was never designed to be a health information service. We decided that very early, and quite deliberately.
We made four choices that look boring on paper but are, in hindsight, the whole reason we can sit honestly alongside the BBC piece rather than flinch at it.
One. We are a companion, not a diagnostic tool.
We do not diagnose. We do not prescribe. We do not look at a list of symptoms and form a view. ManaSmurti is for the part of your day that comes before, or after, or alongside a professional conversation. The part where you are trying to name what you are feeling, or think through a messy situation, or steady yourself at the end of a hard day.
Self-help is not self-diagnosis. Those two ideas get lumped together because a chat interface looks the same no matter what sits on the other side. But they are different. Self-help is the work of understanding yourself. Self-diagnosis is the work of trying to replace the doctor. One is a practice. The other is a shortcut.
Two. We built a safety layer on the assumption that people will still mention medical things.
People do not stay inside the lane you designed for them. Someone using ManaSmurti to talk about loneliness might, mid-conversation, mention chest pain. Someone working through a breakup might mention that they have not slept for four nights. Someone talking about work stress might mention a new lump.
For these moments, we built a layer that sits between the conversation and our companion. It recognises when what the person is describing is medical, clinical, or potentially urgent. It stops the companion from giving an answer, and instead names what it has noticed and points the person to a real doctor, a real helpline, a real hospital. We made it the first thing that runs on every message, not the last.
The helplines we point to are curated, India-first, and published openly. iCall. Vandrevala Foundation. AASRA. NIMHANS. Sneha India. CHILDLINE. NALSA. They are not hidden inside a disclaimer. They are the response.
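If you want to picture how that ordering works, here is a rough sketch. It is illustrative rather than our production code: the function names, the keyword check standing in for the real classifier, and the response wording are all placeholder assumptions. What it does show is the shape of the decision: the gate runs first, and when it fires, the companion never sees the message.

```python
# A minimal, illustrative sketch of a "safety layer first" pipeline.
# Names, the keyword check, and the message wording are placeholders,
# not ManaSmurti's actual implementation.

# Helpline names from the published directory. Contact details live in
# that directory, not hard-coded here.
HELPLINES = ["iCall", "Vandrevala Foundation", "AASRA", "NIMHANS",
             "Sneha India", "CHILDLINE", "NALSA"]

# Stand-in for a real classifier; keyword matching alone would not be enough.
MEDICAL_OR_URGENT_CUES = [
    "chest pain", "lump", "overdose", "can't breathe",
    "suicidal", "bleeding", "have not slept",
]


def looks_medical_or_urgent(message: str) -> bool:
    """Return True when a message sounds clinical, medical, or urgent."""
    text = message.lower()
    return any(cue in text for cue in MEDICAL_OR_URGENT_CUES)


def safety_gate(message: str) -> str | None:
    """Runs before the companion on every message.

    Returns a redirect reply when the message should never reach the
    companion, or None to let the ordinary conversation continue.
    """
    if looks_medical_or_urgent(message):
        return (
            "What you are describing sounds like something a doctor or a "
            "helpline should hear, not a companion. Please reach out to one "
            "of these: " + ", ".join(HELPLINES) + "."
        )
    return None


def companion_reply(message: str) -> str:
    """Placeholder for the companion conversation itself."""
    return "..."


def handle_message(message: str) -> str:
    # The gate is the first thing that runs, not the last.
    redirect = safety_gate(message)
    return redirect if redirect is not None else companion_reply(message)
```

The design choice worth noticing is in handle_message: the gate is not a filter applied to the companion's answer after the fact. It decides whether the companion gets to answer at all.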
Three. Every page says what we are, and what we are not.
This sounds like a compliance line, and in some ways it is. But we also believe it shapes the conversation itself. If you arrive expecting a doctor, you will hear a doctor. If you arrive knowing that you are going to talk to a companion, you will listen differently. The framing is part of the product.
Our disclaimer lives on the home page, on the pricing page, above the chat, and inside the chat. Not once, tucked away. Repeatedly. Visible.
Four. We treat hard questions with an exit, not a guess.
When you ask us something that belongs to a doctor, we do not try. We do not hedge, offer a gentle guess, or list possibilities. We tell you, in plain language, that this is the kind of question that needs somebody qualified, and we point you towards that person. The BBC piece describes a specific scenario where subtle differences in how a person phrased stroke symptoms led different chatbots to give wildly different advice. We do not play that game. The space where we decline to answer is as important as the space where we respond.
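For contrast, this is roughly what an exit looks like next to a guess. Again, the wording and the function are hypothetical, not the product's actual copy; the point is what the reply leaves out.

```python
# Illustrative only: the shape of an exit versus a guess.
# The wording and the function name are hypothetical.

def decline_and_redirect(concern: str) -> str:
    """Compose a reply that refuses to guess and hands the question off.

    Note what is absent: no list of possibilities, no "it might be X or Y",
    no confident tone about a medical question.
    """
    return (
        f"You have mentioned {concern}. That is a question for somebody "
        "qualified, and I am not going to guess at it. Please speak to a "
        "doctor, and if it feels urgent, go to a hospital or call a "
        "helpline now."
    )


print(decline_and_redirect("weakness down one side of your body"))
```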
The self-help system we actually built
ManaSmurti is a small set of short, structured practices that help you sit with a difficult feeling or decision. None of them are medical.
Name It helps you put words to a feeling when it is too tangled to describe. Three steps. A few minutes.
Steady is for moments of anxiety. It guides you through a grounding pattern and a small reflection, and it is built to be finished in under ten minutes.
Crossroads is for when you are stuck on a decision. It walks you through a structured thinking process, without telling you the answer.
Chai Break is for the end of a heavy day. You put your phone down, take a moment, and come back with a small keepsake of what the day felt like.
Virama is a visual, slow, screen-based rest. It is not a meditation. It is closer to what your eyes feel when they finally look away from work.
Together they form a library of self-understanding, not self-correction. They do not tell you what is wrong with you. They help you notice what is there, and give it a shape.
This is what self-help has always meant in the traditions it comes from. The Indian practice of svadhyaya, or self-study, is reading yourself. It is not a practice of fixing yourself without help. The same is true of Stoic journalling, of cognitive reframing, of the quieter forms of reflective writing. Self-help has always assumed that you also have teachers, elders, doctors, and friends.
Our honest limits
If you describe something physical or medical to ManaSmurti, we will ask you to see a doctor. If you arrive in a crisis, we will give you a helpline and stay with you for the moment, but we will not try to counsel you through it. If you describe ongoing symptoms that belong in a clinical setting, we will tell you, in plain language, that this is something a professional needs to see.
We are comfortable with those limits. They are the product.
The BBC piece ends with Abi saying she still uses chatbots for health advice, "with a pinch of salt". I understand her. Waiting weeks for a GP in the UK is not nothing. The same is true here, in India, in a different form. Qualified mental health professionals are expensive, geographically uneven, and often booked out for months.
We did not build ManaSmurti to replace any of that. We built it for the parts of the week in between. The hard afternoon before your therapist's next appointment. The Sunday night when your head is too loud to sleep. The moment when you are trying to name what is wrong, before you can ask anybody else to help.
Self-help should feel like a friend thinking alongside you. Not like a doctor pretending to be one.
That is the line we are trying to hold.
