Founder · Advocate · Builder

Systems fail people.
I close the gap.

Thirty years of turning lived experience into institutional change, from Belgrade to the United Nations to Stanford to Alvis.

Srdjan Stakić
EdD · Columbia University  |  MFA · USC  |  2026 Mira Fellow

Srdjan Stakić has spent thirty years asking what happens to people's health when the systems meant to protect them are designed by those who have forgotten what it means to be vulnerable, afraid, or without power.

Then he got sick.

And he built something from it, while rebuilding himself.

30+
Years in global health advocacy
60+
Countries reached via Y-PEER
$2,200
Cost to build Alvis platform
97.7%
Fall detection accuracy
01

The Work

Chapter 01

Global Health & Policy Infrastructure

At UNFPA, built Y-PEER across 60+ countries, authored a Training of Trainers Manual translated into 15 languages, and created the UNFPA Special Youth Programme, a paid fellowship that still runs today. Contributed to landmark policy publications influencing finance ministers worldwide.

United Nations · 60+ Countries · Still Running
Chapter 02

HIV Elimination in Botswana & Haiti

As Director of a CDC-funded PMTCT program at the François-Xavier Bagnoud Center, built national training and certification systems with Ministries of Health and Education. Botswana became the first high-burden African country to eliminate mother-to-child HIV transmission.

CDC-Funded · Two Nations · Validated Elimination
Chapter 03

Public Health Emergency Preparedness

At the Yale Center for Public Health Preparedness, part of the CDC's national network, designed and evaluated emergency response trainings for frontline health workers and local personnel across the region. Worked under Dr. Linda Degutis, later Director of CDC's National Center for Injury Prevention.

Yale · CDC Network · Emergency Response
Chapter 04

Entertainment as Advocacy Infrastructure

Produced feature films and documentary work premiering at international festivals. Helmed PBS documentary Dreams of Daraa, filmed in Syrian refugee camps in Jordan. Worked in Film Finance and Creative Development at Universal Pictures, and at UCLA's Global Media Center for Social Impact.

USC MFA · PBS · Universal Pictures · Sundance
Chapter 05

Patient Advocacy & Clinical Research Reform

As Chair of Stanford Cancer Center's Patient and Family Advisory Council, institutionalized patient voices in clinical trial design. Developed frameworks for patient advocate participation in trial oversight and pushed plain-language communication as standard practice, not afterthought.

Stanford · Cancer Care · PATHS Program
Chapter 06

AI-Assisted Founding: The Great Inversion

Demonstrated that domain expertise, not technical background, is the true moat in the AI era. Built the entire Alvis platform using AI-assisted development tools for approximately $2,200. His methodology, "The Non-Technical Founder's AI Playbook," is reshaping how we think about technical founders.

Alvis.Care · 2026 Mira Fellow · $2,200 Build
02

The Thread

"Then he got sick. Stage 4 lymphoma. And he experienced, from the inside, exactly what he had spent his career trying to prevent: a vulnerable person navigating a complex system without adequate advocacy, documentation, or support."

Born in Belgrade, Srdjan fled Yugoslavia alone at 16 as war broke out in Bosnia, seeking refuge with an American family he had never met. He rebuilt from zero. Graduated high school. Helped his family immigrate two years later. Went on to earn a BA in Biopsychology and Cognitive Science from the University of Michigan.

At 25, UNFPA recruited him as its youngest staff member. He immediately identified a structural contradiction: an organization trying to serve young people from developing countries that required credentials those same people couldn't access. He proposed a solution. They built it. The UNFPA Special Youth Programme is still running today.

What saved him during cancer wasn't only medicine. It was people who showed up and refused to let him disappear into the machinery of care. His sister called EMS and took him to the Stanford ED, where he was diagnosed. She never left his side. His parents flew from Belgrade. Together, they navigated a tangled health system, trying to advocate for a brother and son they almost lost.

That experience revealed the gap. The 23 hours a day outside the clinical encounter, where most of the real caregiving happens, where most of the real failures happen. And so he built Alvis.

03

Alvis Care

Not AI monitoring.
A caring environment.

Alvis.Care is a 24/7 digital care advocacy and wellness monitoring platform for older adults and people with disabilities. It monitors for falls, medication adherence, behavioral changes, and care quality, and routes real-time alerts to families and care coordinators before small problems become hospitalizations.

The distinction between "AI monitoring" and "caring environment" is not cosmetic. It reflects a fundamentally different design philosophy, one centered on dignity, relationship, and trust rather than surveillance. Focus groups in Palm Springs confirmed it: 73% of participants expressed pilot interest when the framing centered on a caring environment.

Built largely by one person, drawing on thirty years of domain expertise in health education, advocacy, and systems design, for approximately $2,200. The platform stands as proof that the era of the domain-expert founder has arrived.

Partner With Us
Investor Inquiries
Fall Detection Accuracy 97.7%
Database Architecture 240+ tables
Edge Functions 158+
Build Cost $2,232
Pilot Interest 73%
Program 2026 Mira Fellow
06

Essays & Reflections

White Paper  ·  Patient Advocacy  ·  Survivorship Published
The Advocate's Journey: A Path to Wellness Through Purpose, Connection, and Cognitive and Emotional Renewal
Structured patient advocacy is not volunteerism. It is a therapeutic modality: a rigorous pathway for cognitive rehabilitation, post-traumatic growth, and professional reintegration after critical illness. A framework for a new standard of survivorship care.
2026

Working title. This white paper presents an initial framework for understanding advocacy as a recovery pathway. It is grounded in historical precedent, theoretical alignment with established therapeutic models, and the lived experience of patient advocates. It is intentionally incomplete, designed to be refined, challenged, and reshaped by the diverse experiences of advocates themselves.

The Problem: A Broken Promise

Modern medicine saves lives. But standard survivorship care often ends at the clinic door. Millions of survivors are left to navigate a debilitating aftermath of cognitive fog, psychological trauma, and professional ruin alone, a profound systemic failure this paper calls "The Preparation Gap."

No one prepares you for this. The healthcare system is built to manage the clinical aspects of disease, but it fundamentally fails to prepare survivors for the profound, non-clinical aftermath. There is a stark absence of guidance on long-term trauma, the onset of cognitive changes, the strategy for navigating economic fallout, or the lasting impact on family and community. The focus ends at the clinic door.

The Hypothesis: Advocacy as Recovery

This paper presents a powerful, evidence-based hypothesis: structured patient advocacy is not volunteerism. It is a therapeutic modality. It appears to function as a powerful form of cognitive rehabilitation: rebuilding executive function through rigorous intellectual engagement, processing trauma through peer connection, and restoring professional confidence in a uniquely safe environment.

The "Patient Expert" has a rich history. Movements from ACT UP to NORD proved that patient-led engagement is a rigorous discipline capable of driving systemic change through scientific fluency and strategic action. Today, there are over 18 million cancer survivors in the US alone. Only a small fraction, estimated in the low thousands, are engaged in formal advocacy training programs. This underutilization represents both a profound loss for individual survivors and a massive untapped resource for institutional and scientific change.

The Unseen Scars: What Survivorship Leaves Behind

A cancer diagnosis can precipitate a severe psychological crisis, marked by helplessness, fear of recurrence, and loss of identity. The profound impact of chemo fog ("I wasn't sure if my brain would work again at the capacity it once had") is real, widespread, and almost entirely unaddressed by standard care. And a critical illness is not an individual crisis. It is a communal one, placing immense emotional, logistical, and financial strain on spouses, children, and close friends who often become caregivers and advocates without any support of their own.

Then there is the professional cliff. The combined crisis of forced career interruption (resume gaps, difficult re-entry, overwhelming medical debt) creates what researchers call "financial toxicity," a long-term loss of earning potential that compounds every other form of suffering.

Three Pathways: How Advocacy Heals

The Advocate as Scientific Expert. High-stakes advocacy functions as a powerful, real-world model for cognitive rehabilitation. Advocates consistently report that the intense cognitive demands of mastering complex science, reviewing study protocols, and understanding research design create an environment that actively challenges and may help rebuild executive function. This aligns with core principles of occupational therapy and neuroplasticity: where OT uses structured tasks to help patients regain skills for daily living, advocacy uses intrinsically motivating, high-stakes tasks to rebuild the same executive functions required for a demanding professional life.

The Advocate as Architect. Moving beyond personal challenges to design systemic solutions provides profound purpose and intellectual engagement. High-level strategic problem-solving, identifying a gap no one has named and building something to close it, is itself a form of recovery. It is also how Alvis.Care was born.

The Advocate as Community Builder. The transition from patient to advocate allows individuals to reframe their experience into a source of meaning and altruism. This aligns with core principles of cognitive behavioral therapy: the act of mentoring and supporting others actively restructures negative thought patterns. "I am a victim of my illness" becomes "My experience makes me an expert who can help others." Advocacy is, at its core, a process of re-authoring one's life story.

The Safe Environment

Structured advocacy provides a unique environment to test cognitive and professional skills before re-entering a high-stakes workforce. It is characterized by the empathy of shared experience: advocates are surrounded by peers who understand the limitations of illness without requiring explanation. It offers flexible engagement, lower professional stakes, built-in mentorship, and a focus on purpose over pressure.

What makes advocacy uniquely safe is the intrinsic link between the work itself and the advocate's personal recovery narrative. Unlike traditional support groups, which primarily facilitate processing through dialogue, advocacy channels this processing into tangible, externally focused work. This shift from introspection to action, and the experience of creating measurable impact, uniquely reinforces agency and professional confidence. It functions as a support group in action.

The Equity Gap

The current model of unpaid advocacy, which demands significant time and resources, creates a profound equity problem. It systemically favors survivors with financial stability, flexible jobs, and geographic proximity to major medical centers. This unintentionally excludes a large portion of the survivor population, particularly those from lower-income backgrounds, communities of color, rural areas, and other marginalized groups who are often most impacted by healthcare inequities.

Without formal, compensated pathways, advocacy as a recovery tool remains a privilege, not an accessible option for all. This paper proposes three actionable models: a Stipend-Based Contributor Model ($500-$1,000 per quarter for defined commitments), a Part-Time Staff Model (formal roles at 10-20 hours per week with competitive wages), and a Project-Based Consultant Model (engaging experienced advocates at fair market consultant rates for high-stakes, time-bound projects).

The Limitations We Must Name

This framework is honest about what it does not know. There is an inherent selection bias: survivors with a higher degree of pre-existing cognitive function may be more likely to engage in advocacy in the first place. It is impossible to definitively prove, through testimony alone, that advocacy causes cognitive and emotional renewal. There is also a fundamental paradox in advocacy as recovery: the rigorous work of systemic advocacy often requires a baseline of cognitive capacity, emotional stability, and time that many survivors in acute recovery do not yet possess.

There is also the grief of peer loss, a unique and profound challenge that must be named as a serious side effect to be actively managed, not an unavoidable feature of the work. Institutions have an ethical responsibility to provide robust mental health support and the explicit cultural permission for advocates to step back when the cumulative grief becomes unmanageable.

A Call to Action and an Invitation

This paper calls for a fundamental shift in survivorship care. Institutions must move beyond a purely clinical model and formally integrate structured advocacy programs into care plans, recognizing them as a legitimate form of rehabilitation.

If you are reading this as an advocate: we want to hear from you. Do these phases and pathways match your experience? Where does this framework break? What did we miss? This paper is a draft conversation, not a finished prescription. Your feedback will shape the next iteration and is crucial to building a model that truly serves our community.

This white paper is a working framework, developed in collaboration with the patient advocate community. Inquiries, responses, and case study contributions welcome.

Patient Advocacy Published
The 23 Hours: What Happens Outside the Clinical Encounter
Clinical excellence addresses roughly one hour a day of a patient's life. The other twenty-three are where most of the real caregiving, and most of the real failures, happen.
2026

There is a moment in serious illness when you realize the hospital is not the place where your life is happening.

The hospital is where your case is managed. Your life is happening somewhere else. In the apartment where someone is trying to figure out your medications. In the kitchen where a person who loves you is Googling things they don't understand. In the hallway outside your room where a family is whispering because they don't want you to hear them being afraid.

Clinical excellence addresses roughly one hour of a patient's day.

The other twenty-three belong to everyone else.

 

I spent months inside cancer treatment. I watched my parents navigate a country and a health system they had never been in, trying to advocate for a son they almost lost. My father is a man who wrote a book on ethics. My mother raised children through war. Neither of them knew what to do with an insurance form. Neither of them had a contact to call at 2am when something changed and no one was answering.

They were not failing. The system was not designed for them.

It was not designed for anyone in that position.

Here is what I noticed: the clinical team was extraordinary. The oncologists were skilled and attentive. The nurses were kind. The one hour a day I had with medical professionals was often excellent care.

But that hour was surrounded by twenty-three more. And in those twenty-three hours, everything that could go wrong had the space to do so. Medication timing. Symptom changes that seemed small. The creeping exhaustion of caregivers who had no one asking how they were doing. The decisions made in the middle of the night without enough information. The feeling, slow and corrosive, that something was being missed and there was no way to know what.

That feeling is not a failure of love. It is a failure of infrastructure.

 

We talk about healthcare as if it happens in clinics. As if the building matters more than the hours. We fund research inside the hospital walls. We measure outcomes at discharge. We celebrate the interventions that happen in the one hour.

And then we send people home.

What happens next is supposed to take care of itself.

It doesn't.

The numbers are not mysterious. Most adverse events after hospitalization happen at home, in the hours between visits, in the gap between what a family knows and what they need to know. Most falls happen in familiar environments. Most medication errors happen in the spaces we consider safe. Most of the deterioration that leads to readmission was visible before it became a crisis. Someone just wasn't watching. Not because they didn't care. Because they had no tools, no support, and no sleep.

Caregivers are absorbing a system's failure and calling it love.

 

When I started building Alvis, I wasn't thinking about technology. I was thinking about those twenty-three hours. About what my parents needed that no one gave them. About what every family is improvising right now in a language they were never taught.

They needed someone in the room who knew what to look for. Who could tell them: this is normal, this is not, here is what to do. Who could see the small changes before they became emergencies. Who could give them, finally, a way to act instead of just worry.

That is not monitoring. Monitoring is surveillance with a better name. What I am describing is advocacy. The thing that happens when someone who knows the system stands between you and its indifference.

The twenty-three hours need an advocate.

Not an alarm. Not a dashboard. Not an alert sent to a coordinator who is already overwhelmed.

Someone watching. Someone who understands context. Someone who knows that this person slept less last night, moved differently this morning, hasn't eaten the way they usually do, and that these three things together mean something worth paying attention to.

 

My parents flew from Belgrade. They did everything they could.

There are millions of families right now doing everything they can. Carrying the weight of a system that was not designed to catch them. Doing the caregiving that happens in the hours no one is counting.

Those hours are where most of the real care happens.

They are also where most of the real failures happen.

That is the gap.

That is what I am trying to close.

Patient Advocacy  ·  Clinical Research Published
The Patient Advocate as Scientific Peer
A study that patients cannot complete is not rigorous science. Why lived experience belongs inside the review process, and a framework for making it a formal criterion, not a courtesy.
2026

Srdjan Stakić, EdD, MFA  |  Patient Advocate & Member, Stanford Cancer Institute Scientific Review Committee
The views expressed here are my own and do not represent the Stanford Cancer Institute or Stanford University.

The Core Argument

A study that patients cannot complete is not rigorous science. It is an elegant hypothesis that will never become reliable data.

We have built an entire infrastructure around scientific review. We evaluate statistical power. We scrutinize study design. We examine safety oversight with precision. But we often leave one critical dimension to chance: whether the protocol is actually feasible, acceptable, and understandable to the human beings we are asking to enroll.

That is not a soft concern. That is a scientific one.

When participants drop out because the visit schedule was unrealistic, the data becomes incomplete. When consent forms are written in language that patients cannot parse, informed consent is not truly informed. When equity barriers go unexamined, findings cannot be generalized. These are methodological failures. And a structured patient advocate role in scientific review is one of the most direct ways to prevent them.

The Gap in How We Define Rigor

Scientific review committees at most institutions share a working definition of rigor that covers the same core dimensions: sound rationale, well-designed methodology, appropriate statistical analysis, adequate safety monitoring. These are necessary. They are not sufficient.

Missing from most definitions is a third dimension: participant-centered feasibility. Not staffing capacity. Not service line availability. The actual lived experience of completing this protocol as a patient.

Consider what that means in practice. A protocol may be statistically elegant and scientifically coherent. But if it requires twelve clinic visits over six months from patients who are actively undergoing treatment, managing fatigue, caring for dependents, and coordinating transportation, it will fail in ways no power calculation can predict. The dropout will be high. The data will be skewed toward patients with more resources, more flexibility, more support. The findings will be less generalizable. And none of that will be visible in the protocol itself.

This is precisely the kind of gap that patient advocates are positioned to see. Not because we are more compassionate than clinicians, but because we have different information. We know what the days feel like. We know which burdens stack.

What 25 Years of Structured Advocacy Looks Like

The American Cancer Society integrated Community Research Partners into peer review in 1999. More than two decades later, they operate across 22 peer review committees and review thousands of applications each cycle. They have not compromised scientific quality. They have strengthened it.

Their model is instructive because it is structured. Community Research Partners do not offer vague sentiment at the end of a review. They evaluate specific dimensions: the clarity of the general audience summary, the articulation of cancer relevance and real-world impact. They speak immediately after primary and secondary scientific reviewers. They receive a formal scoring role that contributes to the voting range. Any committee member who votes outside that range must verbally justify the deviation.

This is not tokenism. It is accountability. It signals to every applicant that these dimensions will be evaluated formally, not as an afterthought. And that signal changes behavior before the application even arrives. When investigators know that patient-centered feasibility is a formal criterion, they design for it from the start.

The same principle governs statistical rigor. We do not get careful statistics because researchers inherently love power calculations. We get them because those calculations will be formally reviewed. Make patient-centered design a formal criterion, and it gets built in.

A Framework Any Institution Can Adopt

Redefine rigor to include participant-centered feasibility. For scientific rigor to mean what we say it means, it needs to encompass three dimensions: sound scientific rationale and study design; appropriate statistics and safety oversight; and participant-centered feasibility, burden, and communication as evaluated by patient advocates. All three dimensions should be required for a protocol to be judged acceptable. Not preferred. Required.

Give advocates a structured position in the review sequence. Patient advocates should speak after scientific reviewers and before open committee discussion. This positioning matters. It ensures that feasibility review receives the same deliberate attention as scientific review, not the scraps of time remaining after other business concludes. The advocate comments on participant burden across the full scope of the protocol, the realism of the visit and procedure schedule, the clarity and tone of patient-facing materials, and equity and access concerns that may not be visible from a clinical vantage point.

Create a standardized review instrument. A simple structured form anchors the advocate's contribution to specific evidence and makes it transferable across review cycles. It should include an overall feasibility rating, guided questions on burden, daily life impact, equity and access, and the clarity of risks and benefits in plain language. It should include space for specific suggested modifications, not just general impressions. This kind of documentation creates an institutional record. Over time, patterns emerge. The review process becomes a source of learning, not just a gate.

Establish voting accountability. Advocate scores should be part of the formal voting range, and any member voting outside that range should briefly state their rationale. This is not about giving advocates veto power. It is about giving their input the same transparent standing that any other dimension of review receives.

The Signal Effect

Right now, in most institutions, patient-centered feasibility is not a formal evaluation criterion. Investigators know this. It is not that they are indifferent to it. It is that they are rational actors responding to what will actually be reviewed. If statistical power will be formally scrutinized, they build statistical power carefully. If participant burden is not formally scrutinized, it competes with other priorities and often loses.

The moment an institution formalizes patient advocate review, the incentive structure shifts. Investigators begin designing for it. The protocols that arrive at the review committee are better before the meeting even starts. This is not a speculative claim. It is the logic behind every quality improvement system that has ever worked: define what you measure, and you change what gets built.

What I Have Seen from Inside a Review Committee

I sit on the Stanford Cancer Institute's Scientific Review Committee as a patient advocate. The observations that follow are my own, and I am not speaking for the institution or my colleagues. But I can speak from direct experience about what structured advocacy looks like in practice.

The moments where my perspective has contributed most are not the moments where I caught a scientific error. They are the moments where I could say: this visit schedule will be untenable for someone who is neutropenic and lives an hour from the clinic. This consent form describes the randomization process in language that requires a statistics background to parse. This protocol does not address what happens to participants who develop symptoms that make continued enrollment genuinely difficult, and that ambiguity will affect enrollment integrity.

Those are not soft observations. They have implications for data quality, for dropout rates, for the representativeness of the study population. They are scientific observations delivered through a different lens.

What I have also observed is how quickly this kind of input becomes part of the committee's culture when it is expected and structured, versus how easily it can be absorbed without trace when it is informal. Formalization is not bureaucracy for its own sake. It is the mechanism by which insight becomes change.

The Larger Case

There is a version of this argument that is purely pragmatic. Better patient-centered design leads to better enrollment, lower dropout, cleaner data, more generalizable findings. Formalized patient advocate review is an efficiency gain.

That case is true. It is also incomplete.

The fuller argument is about what we believe science is for. Clinical research is conducted on and with human beings who are sick, who are vulnerable, who are trusting institutions with their time, their bodies, and in some cases their lives. The least we owe them is a review process that treats their experience as a source of scientific knowledge, not a logistical afterthought.

Lived experience is data. It is often the data that tells us whether our elegant designs will survive contact with reality. A review process that treats that data as a courtesy rather than a criterion is not as rigorous as it believes itself to be.

We can change that. The framework exists. The evidence exists. The question is whether the institutions that conduct clinical research are willing to expand their definition of rigor to match what rigor actually requires.

Srdjan Stakić, EdD, MFA, is Founder & CEO of Alvis, a privacy-first AI platform for senior care advocacy, and Chair of the Patient & Family Advisory Council at Stanford Cancer Center. He serves as a patient advocate on the Stanford Cancer Institute Scientific Review Committee. He is a cancer survivor, caregiver, and former global health leader with the United Nations.

AI & Health Technology Published
Caring Environment vs. Surveillance: Why the Distinction Is Not Cosmetic
Focus groups in Palm Springs kept saying the same thing: they didn't want AI monitoring their loved ones. They wanted to feel like someone was watching over them. That difference in framing is everything.
2026

In Palm Springs, we sat in a room with families who care for aging parents.

We asked them about technology. About monitoring. About AI that could watch over their loved ones and alert them when something changed.

The room got quiet in a particular way.

Not hostile. Not dismissive. Something more careful than that. These were people who had already spent years negotiating between what their parents needed and what their parents would accept. Between safety and dignity. Between love and control.

One woman said: "My mother would never allow a camera in her home. She spent her whole life building a life that was hers."

Another: "It's not that I don't want help. I just don't want her to feel like a patient in her own bedroom."

Seventy-three percent of the people in that room eventually said yes to piloting Alvis. But not when we called it monitoring. When we called it a caring environment.

That shift is not marketing. It is the entire argument.

 

Surveillance is about the observer.

It is designed to serve the person watching. It collects data that answers the watcher's questions. It optimizes for detection, for liability, for institutional peace of mind. The person being watched is the subject. The camera doesn't know their name. The alert doesn't know their history. The system is indifferent to whether they feel seen or simply seen through.

We have built enormous amounts of healthcare infrastructure on this model. Monitoring equipment. Check-in systems. Remote patient platforms that send data upstream and call it care. The outputs are real. The detection rates are real. But something is missing from the design, and the people in that room in Palm Springs could feel exactly what it was.

They didn't want their mothers watched.

They wanted their mothers cared for.

Those are not the same thing.

 

A caring environment is about the person living inside it.

It is designed around dignity first. It learns a person's patterns not to flag deviations for an algorithm but to understand what is normal for them, specifically, as an individual with a history and a personality and a way of moving through the world. It notices that she always makes tea at 7am and didn't today. It knows that he tends to sleep restlessly before his doctor appointments. It understands context. It holds it.

The alerts it generates are not alarms. They are observations, passed to people who know how to interpret them with care rather than urgency. The goal is not to prevent every bad outcome through maximum surveillance. The goal is to keep a person connected to the people who love them, and to give those people the tools to actually help.

That is a fundamentally different design philosophy. It shows up in every decision. What you measure. How you present it. Who receives it. What you ask families to do with it.

A surveillance system optimizes for coverage.

A caring environment optimizes for trust.

 

My parents lived with me during my cancer treatment. They watched me the way only parents can, with a completeness and a grief that no sensor could replicate. They noticed everything. The way I held my body when I was in pain but didn't want to say so. The foods I could tolerate on different days. The particular silence that meant I needed company and the particular silence that meant I needed them to leave me alone.

No technology replaces that.

But here is what technology can do: it can extend that quality of attention to the hours when the people who love you cannot be present. It can give families the information they need to show up better. It can close the gap between a person's daily reality and the people who care about it.

That is the promise. Not surveillance. Not optimization. Not fall detection accuracy as an end in itself.

The promise is: someone is paying attention. Not watching. Attending.

There is a difference.

 

The families in Palm Springs already knew this. They had been living it. They had already navigated the conversation with a parent about what help they would and wouldn't accept. They already understood that safety purchased at the cost of dignity is not really safety. It is just a different kind of loss.

What they were waiting for was a technology that understood this too.

Not a camera in the bedroom.

A presence in the home.

Not a dashboard of anomalies.

A way to stay close.

The distinction is not cosmetic. It is the difference between a tool that diminishes a person and one that honors them. Between technology designed for the institution's comfort and technology designed for the human being living inside it.

We are building the second kind.

Not because it is better marketing. Because it is the only kind worth building.

Practical Guide  ·  Clinical Research Resource
Patient Advocate SRC Feasibility Assessment Guide
A structured framework for patient advocates serving on Scientific Review Committees: how to assess relevance, feasibility, burden, generalizability, and equity, and how to frame your feedback so it lands as the scientific argument it actually is.
2026

This guide is designed to empower patient advocates serving on Scientific Review Committees. It provides a structured framework for reviewing clinical trial protocols, translating lived experience into a practical, scientifically valid critique that improves trial design and the trial's chances of success.

Your Primary Role: The Lived Experience Expert

Your core mandate is to assess five things:

  • Relevance: Is this research question important to patients?
  • Feasibility: Can patients realistically participate as the study is designed?
  • Burden: Is the burden placed on participants reasonable and justified by the potential benefit?
  • Generalizability: Will these results actually apply to the real-world patients we are trying to help?
  • Equity: Does the trial design systematically exclude populations, creating both scientific bias and health disparities?
Section 1: Relevance & Significance
Research Question
  • Does this study address a question that matters to the patient community?
  • Is there a clear unmet need for this research?
Study Endpoints
  • Does the primary outcome reflect a benefit patients actually care about (survival, quality of life, symptom reduction), or a biomarker a patient won't feel?
  • Are Patient-Reported Outcomes included? If so, are they the right ones? If not, is this a missed opportunity?
  • If this trial succeeds, will the results be meaningful enough to change clinical practice or improve a patient's life?
How to Frame It: "I'm concerned that the primary endpoint may not capture what matters most to patients with this condition, which is often maintaining cognitive function or reducing daily pain, not progression-free survival as a standalone measure."
Section 2: Feasibility, Generalizability & Equity

A study that is too burdensome or exclusive will fail to enroll, produce high dropout rates, and waste resources. All of this is a core scientific validity concern, not just a fairness concern.

Eligibility Criteria
  • Are the criteria too narrow? Do they exclude real-world patients with common comorbidities, borderline lab values, or brain metastases?
  • Is there a clear scientific reason for every exclusion, or are they copy-pasted from old protocols?
  • What percentage of real-world patients with this condition would actually qualify? If it is less than 30–40%, this is a serious generalizability concern.
Your Most Powerful Scientific Critique: Generalizability
If the eligibility criteria create a "perfect patient" who does not exist in the real world, the results will not translate to clinical practice. When investigators say "my patients don't look like that," the trial has already failed, even if the data is clean.
Who Is Being Systematically Excluded?
  • Patients with common comorbidities (diabetes, hypertension, prior cancers)
  • Patients with borderline kidney or liver function values
  • Older adults or those with reduced performance status
  • Patients who cannot afford frequent travel, parking, or time off work
  • Patients in rural areas or without reliable transportation
  • Patients with caregiving responsibilities (single parents, those caring for aging parents)
  • Patients working hourly jobs without paid leave or stable housing
Study Procedures & Schedule
  • How many visits are required and how long is each one? Is this feasible for someone who is actively in treatment?
  • What are the hidden financial costs (parking, gas, missed work, childcare, lodging)? Does the protocol offer reimbursement?
  • Are all procedures scientifically necessary, or can any be consolidated or eliminated?
  • Is there a specific point in the study where you predict significant dropout?
How to Frame It: "I want to flag a feasibility and equity issue. The requirement for weekly visits for three months, combined with no travel reimbursement, will systematically exclude patients who work hourly jobs, lack transportation, or have caregiving responsibilities. This is not just unfair; it means our study population will not represent the real-world population. That is a scientific validity concern."
Section 3: Language, Tone & Patient-Facing Materials

The language used throughout a protocol shapes how the research team views and treats participants. Dehumanizing language is not just an ethical issue; it undermines trust, recruitment, and retention.

  • "Subject" vs. "Participant": "Subject" is laboratory language. Flag it and recommend "participant" or "patient" throughout.
  • "Subject failed therapy" vs. "The therapy did not work": No patient fails. Therapies fail to work. The scientifically accurate framing locates the outcome with the treatment.
  • Consent form readability: Is it written in plain language, ideally at a 6th–8th grade level? Does it clearly distinguish standard care from research? Are the burdens honestly described?
  • Overall tone: Does the protocol treat participants as essential research partners, or as data sources who are fungible and replaceable?
How to Frame It: "The protocol states 'subject failed treatment' in multiple places. The scientifically accurate framing is 'the treatment did not work for this patient.' No patient fails; therapies fail to work. This language shift matters for how the team is culturally oriented toward the people making this research possible."
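For the consent form readability check, a quick numeric screen can complement a careful read-through. The sketch below is an illustration added here, not part of the committee's standard toolkit: it approximates the Flesch-Kincaid grade level of a passage, using a deliberately crude vowel-group heuristic to count syllables.

```python
# Rough, illustrative readability screen for consent-form drafts.
# The vowel-group syllable counter is a crude heuristic, and the
# whole script is a sketch, not a validated health literacy tool.
import re


def syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (including y).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllable_count = sum(syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (
        0.39 * (len(words) / len(sentences))
        + 11.8 * (syllable_count / len(words))
        - 15.59
    )
```

A draft scoring well above grade 8 even on this rough screen is worth flagging for plain-language revision; a validated readability tool or a health literacy reviewer should make the final call.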
SRC vs. IRB: When Your Concern Gets Deflected

Use this when a committee member suggests your concern is "an IRB issue, not an SRC issue." Most patient-centered concerns are both, or are primarily SRC issues for scientific validity reasons.

For each concern, name the committee whose mandate it falls under, then frame your response accordingly:

  • Burdensome schedule, high costs, painful procedures · SRC issue: "If this burden causes high dropout, we won't have valid data to answer the research question."
  • Eligibility criteria that are too strict · SRC issue: "These criteria will make it very difficult to enroll a representative population, which limits the scientific value of the findings."
  • Design that systematically excludes by income, geography, or circumstance · SRC + equity issue: "This is both a scientific validity issue and an equity issue. The results won't generalize and will perpetuate existing disparities in care."
  • Study endpoints that don't matter to patients · SRC issue: "The endpoints don't capture what matters to patients, which limits the clinical utility of the findings."
  • Dehumanizing language ("subject," "subject failed therapy") · SRC issue: "This shapes how the team views participants and undermines the trust essential for retention."
  • High burden and a consent form that doesn't disclose it · Both: "For the SRC: this burden threatens accrual. For the IRB: this burden is not clearly described in the consent form."
Venture Capital  ·  Innovation  ·  Published
The Great Inversion: A New Playbook for Venture Capital
For decades, VC backed technical founders who needed help understanding markets. AI has created the inverse: domain experts who can now build, and just need help scaling. A new founder archetype, a new deal flow, and a new portfolio strategy.
2026

Srdjan Stakić, EdD, MFA  |  Founder & CEO, Alvis.Care

For decades, the venture capital script has been predictable: find a technical prodigy, fund them with millions to hire a team, and wait two years for a product to emerge in search of a market. But a profound shift in the technology landscape is creating a new, overlooked asset class. I call it The Great Inversion. Today, AI is empowering proven domain experts (nurses, ship captains, factory managers with decades of experience) to build enterprise-grade solutions themselves. For the investment community, this signals a new founder archetype and a new set of rules for generating returns.

When I was fighting Stage 4 lymphoma, I experienced firsthand the devastating gaps in healthcare coordination that can mean the difference between life and death. Despite holding a doctorate in health education from Columbia and years of advocacy experience, I was powerless against the system failures that put vulnerable patients at risk. The traditional path for addressing such problems would mean writing policy papers or assembling a technical team with millions in funding.

Instead, I built Alvis myself.

My journey from cancer patient to healthcare technology founder isn't just a personal story; it's evidence of a fundamental inversion in how innovation happens. Three years ago, this would have been impossible without a technical co-founder and a multi-million dollar seed round. Today, AI-assisted development makes it a capital-efficient reality that reshapes the entire venture funding landscape.

The New Economics of Startups: From Millions to Thousands

The democratization of AI inverts the traditional startup cost structure in ways that create entirely new investment opportunities. Sophisticated large language models now allow founders to build through conversation, a practice AI researcher Andrej Karpathy calls "vibe coding." The critical skill is no longer writing syntax but clearly communicating intent. GitHub research shows developers using AI-assisted coding see productivity gains exceeding 55%.

For a non-engineer like me facing a life-threatening medical crisis, this meant producing an enterprise-grade healthcare platform capable of supporting 100,000 users for $2,232 in direct costs over three months of development. A project of this scale would traditionally require $6–8 million and 18–24 months. The financial barrier to entry has effectively dissolved, replaced by focused expertise and urgent need.

De-Risking the Founder: The Virtual Advisory Board

A key innovation in this new model is the ability to de-risk execution without a large payroll. I orchestrated AI personas to simulate expertise I lacked, creating a virtual advisory board to pressure-test every decision:

CTO AI: "What are the security vulnerabilities in this architecture?"
CFO AI: "How do the unit economics work at 10,000 customers?"
Patient Advocate AI: "What are the ethical red flags in healthcare data handling?"

This process, which mirrors Dr. James Zou's research at Stanford where AI "virtual scientists" compressed months of work into days, surfaced critical flaws in minutes. For investors, this means solo founders can arrive with products that have already undergone the equivalent of months of expensive, multi-disciplinary consulting, and they've done it under the pressure of real-world urgency that traditional founders rarely experience.
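As a minimal illustration of how such a board can be orchestrated, the sketch below builds one persona-framed prompt per advisor for a single decision. The persona framings echo the questions above, but the function, its output format, and the example decision are hypothetical illustrations, not Alvis's actual implementation.

```python
# Illustrative sketch of a "virtual advisory board": each persona is a
# framing that re-asks the same decision from a different expert mandate.
# Persona names echo the article; everything else is a hypothetical sketch.

PERSONAS = {
    "CTO AI": "You are a skeptical CTO. Probe security vulnerabilities and scaling risks in the proposed architecture.",
    "CFO AI": "You are a rigorous CFO. Probe unit economics and margins at 10,000 customers.",
    "Patient Advocate AI": "You are a patient advocate. Probe ethical red flags in healthcare data handling.",
}


def board_prompts(decision: str) -> list[tuple[str, str]]:
    """Build one (role, prompt) pair per persona for a decision under review.

    The pairs are plain strings, ready to send to any chat-completion API;
    collecting the critiques side by side is the board "meeting."
    """
    return [
        (role, f"{framing}\n\nDecision under review:\n{decision}\n\nList your top three concerns.")
        for role, framing in PERSONAS.items()
    ]
```

Calling `board_prompts("Store care-plan notes in one shared database schema.")` yields three role-tagged prompts, so the same decision gets three adversarial readings before any code or capital is committed.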

The New Founder Archetype: The Community Champion

This reality produces a different founder archetype. Community Champions are domain experts building for their own professional communities, driven by problems they've lived with personally. Unlike traditional technical founders who must learn industries from scratch, Community Champions don't need to find product-market fit; they embody it.

When I was navigating cancer treatment, I wasn't theorizing about healthcare coordination problems. I was watching families struggle with the same gaps that nearly cost me my life. Every feature in Alvis addresses pain points I experienced firsthand. This isn't market research; it's lived experience driving innovation.

For VCs and incubators, Community Champions represent untapped deal flow with several structural advantages:

  • Built-in market validation: they're solving problems they and their professional networks experience daily.
  • Capital efficiency: they can produce viable MVPs with small investments.
  • Speed to market: urgent personal need creates development timelines that traditional founders can't match.
  • Industry credibility: colleagues trust solutions built by practitioners who understand their challenges intimately.

The 70/30 Investment Opportunity

Academic literature confirms domain experts can build approximately 70% of enterprise solutions using AI-assisted development. The remaining 30% is where venture capital becomes critical, and where the highest returns await.

In my case, I could build all the core healthcare logic, user interfaces optimized for medical professionals, and industry-specific features addressing real workflow problems. Alvis emerged with 240+ database tables, 158+ edge functions, HIPAA-compliant architecture, and real-time family communication systems that reflect deep healthcare domain knowledge.

But I needed professional support for technical hardening; security penetration testing, performance optimization, database architecture review. Business operations beyond healthcare expertise required guidance: pricing strategy, enterprise sales processes, regulatory navigation.

This creates a structured investment opportunity. Instead of funding teams to figure out what to build, investors can support proven solutions that need professional scaling. The 30% gap represents precisely where traditional VC value-add (business development, operational expertise, network access) becomes most valuable.

A New Portfolio Strategy: Cultivating Ponies, Not Just Unicorns

The strategic opportunity lies in augmenting the hunt for unicorns with a new objective: cultivating Ponies. These are profitable, laser-focused companies with built-in user bases and immediate paths to profitability. Unlike unicorns that require massive markets and extensive funding, Ponies serve high-value niche communities. They're built by domain experts who understand their markets intimately and can achieve profitability quickly.

Because of AI-assisted development, Ponies can generate substantial returns without tens of millions in follow-on funding. They serve specialized communities (compliance officers in financial sectors, rare disease advocates, manufacturing quality managers) that are too focused for unicorn scale but perfect for highly profitable, capital-efficient businesses.

Action Plan for Investment Leaders

Source deals from domain experts. The next great founder isn't a 20-year-old hackathon champion; she's a 20-year industry veteran who experienced a crisis and decided to build the solution herself. Look for founders at industry conferences, in trade publications, and among the practitioners facing real problems inside your portfolio companies.

Build full-spectrum support systems. Technical hardening support becomes your core value-add. Pair security audits and scalability planning with traditional business mentorship on pricing, go-to-market strategy, and operational scaling. This addresses the 30% gap that domain experts need to bridge.

Rethink fund metrics. Build portfolios that support sustainable, profitable ventures alongside unicorn bets. Ponies can provide 3–5x returns over 18–24 months, offering more consistent cash flow to balance high-risk, high-reward investments.

Systematize virtual advisory boards. Teach founders this methodology to de-risk ventures before deploying capital. When founders arrive having already pressure-tested their solutions through AI-simulated expertise, your investment risk decreases significantly.

The Great Inversion

For decades, venture capital funded technical founders who needed help understanding markets, forcing awkward partnerships between domain experts and technical co-founders. AI has created a fundamental inversion: non-technical market experts can now build independently and just need support to scale professionally.

This shift is already happening across industries. In healthcare, patient advocates are building coordination platforms. In manufacturing, plant managers are creating quality monitoring systems. In financial services, compliance officers are automating regulatory reporting. The investment firms that adapt their sourcing, due diligence, and support models for this new wave of founders won't just unlock new sources of alpha; they'll define the next era of value creation.

The revolution is personal, urgent, and driven by people who experienced problems so directly they had no choice but to solve them. For investors, the choice is clear: lead this transformation by supporting Community Champions who embody their markets, or be disrupted by competitors who recognize that the most powerful innovations come from those who lived the problems they're solving.

Srdjan Stakić, EdD, MFA, is a Stage 4 lymphoma survivor and healthcare advocate with a doctorate in health education from Teachers College at Columbia University. He founded Alvis.Care after experiencing critical gaps in patient care coordination firsthand. At 25, the United Nations recruited him to establish Y-PEER, a global health network reaching millions across 60+ countries. He is a 2026 Mira Fellow and Chair of Stanford Cancer Center's Patient & Family Advisory Council.

Entrepreneurship  ·  AI  ·  Innovation  ·  Published
The Great Inversion: A New Playbook for Domain Expert Founders
How industry experts can build enterprise solutions without technical co-founders. A practical guide to the 70/30 split, the virtual advisory board, the three development phases, and the action plan for the professional who's ready to build the solution their industry has been waiting for.
2026

Srdjan Stakić, EdD, MFA  |  Founder & CEO, Alvis.Care  |  with some help from AI

Three years ago, when I was fighting Stage 4 lymphoma, I experienced firsthand the devastating gaps in healthcare coordination that can mean the difference between life and death. Despite having a doctorate in health education and decades of experience as a patient advocate, I was powerless to fix the system failures I witnessed. The traditional path would have meant writing policy papers or searching for a technical co-founder.

Today, that same frustration has transformed into Alvis.Care, a HIPAA-compliant platform that keeps families connected during medical crises. But here's what makes this story different: I built it myself, without a single line of traditional coding experience.

This isn't just a personal anecdote. It's evidence of what I call The Great Inversion: a fundamental shift in how software gets built that's quietly revolutionizing who can build it. For decades, the startup playbook was rigid: find a technical co-founder, give them significant equity, and hope they understand your industry well enough to build the right solution. AI has inverted this completely. Domain expertise paired with AI tools can now produce better solutions faster than traditional technical approaches.

The New Economics: From Millions to Thousands

The numbers tell the story directly. Over three months of intensive development, I spent $2,232 in direct platform costs, starting with basic subscriptions and scaling to premium tiers as complexity increased:

Lovable.dev Platform   $1,254.00
Google AI (Gemini)       $514.90
Anthropic Claude         $400.54
Supabase Hosting          $42.74
OpenAI ChatGPT            $20.00
Total                  $2,232.18

Compare this to the traditional alternative: hiring an engineering team would require $6–8 million and 18–24 months to build equivalent functionality. The financial barrier to entry has effectively dissolved, replaced by focused expertise and urgent need.

The Virtual Advisory Board

One of my most significant breakthroughs came when I realized I could orchestrate AI personas to simulate expertise I lacked, creating what I call a virtual advisory board. Instead of searching for a technical co-founder or paying expensive consultants, I developed specialized AI advisors that pressure-tested every major decision.

My CTO AI challenged architectural decisions: "What are the security vulnerabilities in this approach? How will this scale under enterprise loads?" My CFO AI scrutinized the business model: "How do the unit economics work at 10,000 customers? What happens to margins as you scale?" Patient Advocate AI flagged ethical considerations: "What are the red flags in healthcare data handling?"

This process mirrors Stanford research by Dr. James Zou, where AI "virtual scientists" compressed months of traditional research into days. Critical flaws that might take weeks to surface in traditional development were identified and addressed in minutes. For investors and founders alike, it means arriving at decisions with the equivalent of months of expensive, multi-disciplinary consulting already completed.

Understanding the 70/30 Split

Academic literature confirms domain experts using AI-assisted development can build approximately 70% of a complete enterprise solution. Understanding this split transformed how I approached the project.

The 70% I could build: all the core business logic (the workflows and processes I understood intimately from years of healthcare advocacy), user interfaces optimized for medical professionals, industry-specific features that addressed real pain points, the HIPAA-aligned security architecture, and real-time family communication systems. Alvis emerged with 240+ database tables, 158+ edge functions, and four distinct onboarding workflows for patients, families, caregivers, and agencies.

The 30% requiring professional expertise: security penetration testing, performance optimization under enterprise loads, database architecture review, formal compliance auditing, pricing strategy, enterprise sales process design, and legal review of data handling practices. The warning signs became clear over time: AI tools couldn't diagnose performance issues, security vulnerabilities required specialized remediation knowledge, and business decisions fell outside my professional experience.

Three Development Phases

Phase 1: Domain-Led Discovery. I used my existing knowledge to conduct 50+ interviews with healthcare colleagues, mapping workflows with the precision only an industry insider could achieve. Customer discovery happened through existing relationships, not expensive market research. Pricing decisions were based on actual value delivered to patients and families. This phase felt almost effortless compared to traditional startup discovery; I wasn't learning the industry, I was applying it.

Phase 2: AI-Assisted Construction. I practiced what AI researcher Andrej Karpathy calls "vibe coding": describing functionality in business terms and letting AI generate the implementation. Platforms like Lovable.dev converted my requirements into working code. This wasn't traditional programming; it was translating deep industry knowledge into technical specifications that AI could execute. When features didn't quite match real-world healthcare workflows, I could articulate the specific adjustments from a practitioner's perspective.

Phase 3: Professional Hardening. This required stepping outside my expertise: hiring security professionals for penetration testing, engaging compliance consultants for healthcare-specific regulatory requirements, working with legal professionals on documentation and audit procedures. The $10,000–25,000 investment in professional services during this phase proved essential for creating a truly enterprise-ready solution.

The Community Champion Archetype

This convergence creates what I call the Community Champion: domain experts building for their own professional communities. Unlike technical founders who must learn industries from scratch, Community Champions don't need to find product-market fit; they embody it. Your professional network becomes your customer base. Deep problem understanding comes from lived experience rather than market research. Industry credibility means colleagues trust solutions built by peers who understand their daily challenges.

Forty-two percent of startups fail because they build products nobody wants, a trap domain experts naturally avoid. You're not guessing about customer needs. You're solving problems you've lived with for years.

Common Pitfalls to Avoid

Business model validation remains crucial; technical feasibility doesn't guarantee customers will pay. Plan for the 30% gap from the beginning; professional expertise in security, compliance, and business operations can't be retrofitted cheaply. Deploy early and iterate based on real customer usage; perfectionism that prevents feedback is a liability. And compliance considerations must be architectural, not afterthoughts; in healthcare, HIPAA requirements should influence every design decision from day one.

The Action Plan

Weeks 1–2: Interview 20+ colleagues about their biggest daily frustrations. Research existing solutions and identify specific gaps. Validate willingness to pay at specific price points. Assess your available time commitment; minimum 40 hours per week for 6 months.

Weeks 3–4: Evaluate AI development platforms (Lovable.dev, Cursor, Bolt.new). Practice prompt engineering with simple feature requests. Create virtual advisory board AI personas for your industry.

Months 2–3: Build core functionality using AI-assisted development. Test with a small group of trusted colleagues. Iterate based on real user experience. Document architecture decisions.

Months 4–5: Conduct basic load and performance testing. Engage security professionals for vulnerability assessment. Create compliance documentation. Build customer onboarding and support processes.

Month 6+: Deploy with a pilot customer group of 5–10 committed users. Collect detailed usage analytics. Plan marketing strategy to reach broader professional community.

The Great Inversion

The transformation from Stage 4 lymphoma patient to healthcare technology founder wasn't just personal; it represents a fundamental shift in how innovation happens. The most significant barriers to building software solutions are no longer technical. They are having the courage to begin and the clarity to communicate what you need to AI systems that can build it.

Your professional frustrations, the daily inefficiencies you witness, the problems your colleagues complain about: these aren't just workplace annoyances. They're market opportunities waiting for someone with your expertise to address them. The tools exist. The market is ready.

The Great Inversion has arrived. Domain experts now have the tools to build the solutions their industries have been waiting for. Your expertise is the foundation. AI provides the tools. Your professional community provides the market.

Start building. Your industry is waiting for the solution only you can create.

Srdjan Stakić, EdD, MFA, is a Stage 4 lymphoma survivor and healthcare advocate with a doctorate in health education from Teachers College at Columbia University. He founded Alvis.Care after experiencing critical gaps in patient care coordination firsthand. He is a 2026 Mira Fellow, Fulbright Fellow, and Chair of Stanford Cancer Center's Patient & Family Advisory Council.

White Paper  ·  AI  ·  Self-Discovery  ·  Published
The AI Prompt as Personal Self-Discovery
How working with AI reveals what we know, what we don't, and who we're becoming. On tacit knowledge, the protégé effect, the fresh-start advantage, translation as a creative act, and why the most powerful prompt is sometimes not a prompt at all.
February 2026

Srdjan Stakić, EdD, MFA  |  Founder & CEO, Alvis.Care  |  Chair, Stanford Cancer Center Patient & Family Advisory Council  |  Member, Stanford Cancer Institute Scientific Review Committee  |  February 2026

Introduction: The Unexpected Intimacy of the Prompt

There is something very human about writing a prompt for an AI system. Not in a metaphorical or aspirational sense, but in a practical, lived-experience sense. The act requires you to do something most of us rarely do in professional life: slow down, externalize your assumptions, and explain yourself completely to an intelligent entity that has no prior context about your problem.

It is, in many ways, like having a brilliant new assistant who graduated at the top of their class but started five minutes ago. They can do extraordinary things, but only if you tell them exactly what you need, why you need it, and what good looks like. There is nothing diminishing about this process. If anything, it represents one of the most cognitively rich and humanistically valuable interactions available in modern professional life.

This paper argues that working with AI, through prompts and through questions, is not merely a technical skill or a productivity hack. It is a practice of personal self-discovery. The act of instructing an AI reveals what you know. The act of questioning it reveals what you do not. And if you are open to the process, the cycle of writing, evaluating, questioning, and refining becomes a discipline that sharpens your thinking, exposes your assumptions, and expands the boundaries of your understanding.

1. The Paradox of Tacit Knowledge

The philosopher Michael Polanyi introduced the concept of tacit knowledge in 1958 with a deceptively simple observation: we know more than we can tell. A master carpenter does not consciously compute angles and grain patterns. A veteran clinician notices something "off" about a patient before lab results confirm it. A seasoned entrepreneur feels the difference between a pitch that will land and one that will not.

This knowledge is real, valuable, and stubbornly resistant to articulation. It lives in our hands, our gut, our peripheral vision. And for most of professional history, it has been transferred through apprenticeship, osmosis, and time. The gap between what you know and what you can explain has been, functionally, unbridgeable.

Prompt writing is, in essence, a technology for bridging the tacit knowledge gap. It forces you to stand on the bridge and build it at the same time.

When you sit down to write a prompt, you collide with this gap immediately. You discover that the thing you do effortlessly, the judgment call you make a hundred times a day, requires an elaborate scaffold of context, priorities, and criteria that you have never once written down. The AI does not know that "good enough" in this context means something different than "good enough" in that one. It does not share your history of what has worked and what has failed.

And so you build the scaffold. You articulate the criteria. You make the implicit explicit. And in doing so, you learn something about your own expertise that was previously invisible to you.

2. Prompt Writing as Pedagogy

There is a well-documented phenomenon in education research called the protégé effect: people learn material better when they expect to teach it to someone else. The mechanism is straightforward. Teaching requires you to organize information, identify gaps in your understanding, and anticipate where a learner might get confused. The preparation to teach is itself a form of deep learning.

Prompt writing activates this same mechanism. When you prepare instructions for an AI system, you are not dumbing down your expertise. You are performing the exact cognitive work that deepens it. You are asking yourself: What do I actually mean? What assumptions am I carrying that I have never examined? What is the difference between what I want and what I am likely to ask for?

Consider a healthcare executive writing a prompt to analyze patient satisfaction data. She does not just say "analyze this data." She finds herself specifying which metrics matter most, which timeframes are relevant, what counts as a significant trend, and what kind of language the final summary should use. In articulating these things, she is not just instructing the AI. She is crystallizing her own analytical framework in a way that might be useful to her team, her board, or her successors.
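
To make this concrete, here is a minimal sketch in Python of what the executive's implicit criteria look like once written down. Every field name and value below is purely illustrative, a hypothetical example of the pattern rather than any real organizational artifact.

```python
# Hypothetical sketch: the executive's tacit analytical framework, made
# explicit as a reusable prompt specification. All values are illustrative.

ANALYSIS_SPEC = {
    "metrics": ["likelihood to recommend", "wait-time satisfaction"],
    "timeframe": "trailing 12 months, quarterly buckets",
    "significant_trend": "a change of 5+ points sustained across two quarters",
    "audience": "board of directors",
    "tone": "plain language, no statistical jargon",
}

def build_prompt(spec: dict) -> str:
    """Render the specification as explicit instructions for an AI assistant."""
    return (
        f"Analyze the attached patient satisfaction data.\n"
        f"Focus on these metrics: {', '.join(spec['metrics'])}.\n"
        f"Timeframe: {spec['timeframe']}.\n"
        f"Treat as significant: {spec['significant_trend']}.\n"
        f"Write for: {spec['audience']}. Tone: {spec['tone']}."
    )
```

The point of the sketch is not the code itself but what writing it forces: each key in the dictionary is a judgment call the executive was already making, now captured in a form her team can inspect, question, and reuse.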

This is not incidental. It is the core of what makes prompt writing valuable beyond its immediate output. The process of instructing an AI is simultaneously a process of self-instruction.

3. The Prompt as Mirror

There is a version of prompt writing that most practitioners discover only after they have been doing it for a while, and it is the most valuable version of all. It is the moment you realize that the prompt is not really for the AI. It is for you.

When you attempt to write clear instructions for a system that takes your words literally and has no ability to fill in what you left out, you encounter your own thinking with uncomfortable precision. You discover that the strategy you thought was clear is actually vague. You find that your priorities, when forced into an explicit hierarchy, contradict each other. You notice that the thing you assumed was a fact is actually an untested belief you inherited from a previous role, a mentor, or an industry convention no one has revisited in years.

The prompt reveals where your thinking is sharp and where it is held together by habit and handwaving.

Consider a founder preparing to describe their product's value proposition in a prompt. They sit down to write it and discover they have three different versions in their head, each tailored to a different audience, and none of them fully compatible. The AI did not create this problem. The act of writing for the AI surfaced it. Now it can be addressed, not just in the prompt but in the business itself.

For leaders, this has a compounding effect. The clarity you develop through prompt writing does not stay confined to your AI interactions. It bleeds into how you brief your team, how you write strategy documents, how you make decisions under uncertainty. The practice of making the implicit explicit, once developed, becomes a way of thinking that improves everything it touches.

4. The Fresh-Start Advantage

In organizational psychology, shared context is often treated as an unqualified good. Teams that have worked together for years can communicate in shorthand, finish each other's sentences, and move quickly because they share a deep reservoir of unspoken understanding. But shared context has a shadow side. It creates blind spots. It lets ambiguity survive because no one is willing, or even able, to point out that the emperor has no clothes. Decades of groupthink research confirm that shared assumptions can become invisible constraints.

An AI system has no shared context. It arrives at your problem the way a first-day consultant arrives at a new client: with intelligence, capability, and zero assumptions. This is not a bug. It is a feature of extraordinary value.

The AI's lack of shared context is not a limitation to work around. It is a lens that brings into focus the assumptions you have stopped noticing.

When you must explain everything from first principles, you often discover that some of your first principles are not actually principled. They are habits. They are inherited frameworks that no one has questioned. They are solutions to problems that no longer exist. The fresh-start nature of each AI interaction gives you permission, even compulsion, to re-examine your own thinking.

5. From Fresh Start to Shared History: The Evolving Mirror

The fresh-start dynamic is real, and for many interactions it remains the default. But the landscape is shifting. AI systems are increasingly developing the ability to maintain memory across conversations: your preferences, your projects, your communication style, your professional context, your history of what has worked and what has not. The day-one hire is becoming a long-term collaborator.

In practice, the self-development value does not diminish as shared context grows. It evolves. Early-stage prompting, the zero-context interaction, is primarily about self-articulation. As memory accumulates and shared context deepens, the interaction shifts from articulation to confrontation. An AI that knows your history can do something a fresh-start system cannot: it can notice contradictions. It can observe that the strategy you are describing today conflicts with the priorities you outlined two months ago. It can recognize patterns in your thinking that are invisible to you precisely because you are inside them.

Early-stage prompting is about self-articulation. Mature collaboration becomes about self-confrontation. The mirror gets sharper, not duller.

There is a meaningful parallel here to long-term therapeutic or coaching relationships. A good therapist in the first session asks you to describe your situation, and the act of describing it is clarifying. A good therapist in the fiftieth session asks you why you are describing it the same way you did in the fifth session, and the act of confronting that pattern is transformative. The depth of shared context does not reduce the insight. It changes the kind of insight that becomes possible.

The self-development, in other words, compounds.

6. Translation as a Creative Act

Walter Benjamin argued that a great translation does not simply convey meaning; it illuminates the original by finding new forms for its underlying structure. The translator must understand something more deeply than the casual reader in order to render it faithfully in another language.

Prompt writing is a form of translation. You are taking the messy, multidimensional, emotionally textured knowledge in your head and rendering it in a language that a different kind of intelligence can act on. This is not mechanical transcription. It requires genuine creativity. You must choose metaphors that convey not just information but intent. You must structure your instructions in a way that prioritizes what matters. You must decide what to include and, just as importantly, what to leave out.

And like all creative work, the translation is never perfect, which means it is iterative. You write a prompt, evaluate the output, refine your instructions, and try again. This feedback loop is indistinguishable from the iterative process of any creative endeavor: writing a novel, designing a building, composing a piece of music. The medium is different. The cognitive process is the same.

7. The Great Inversion: Domain Experts as Builders

For two decades, the technology industry operated under a fundamental assumption: building software required software engineers. Domain experts (the doctors, educators, advocates, and operators who understood problems most deeply) were cast as "product owners" or "stakeholders." They described what they needed. Engineers built it. Translation loss was accepted as inevitable.

AI tools, and the prompt-writing skills that power them, have inverted this relationship. The ability to articulate a problem clearly, to specify constraints and success criteria, to iterate on outputs with domain judgment: these are the skills of a domain expert, not a developer. And increasingly, they are sufficient to build real, functional, production-grade solutions.

The bottleneck was never the domain expert's intelligence. It was the translation layer between their knowledge and the tools. Prompt writing dissolves that layer.

At Alvis.Care, where I am building a 24/7 digital care advocacy platform for seniors and people with disabilities, this inversion is not theoretical. It is the operational reality. Years of caregiving experience, clinical advocacy, and patient-centered design thinking translate directly into the prompts, specifications, and interaction patterns that shape the platform. The gap between "knowing what patients need" and "building what patients need" has narrowed to the width of a well-written prompt.

8. The Art of Not Knowing: Questions as a Building Material

Most discussions of AI collaboration focus on the prompt: the instruction, the command, the carefully crafted specification. But there is an equally important mode of interaction that rarely gets the attention it deserves. It is the question. Specifically, it is the willingness to say: I do not know what I do not know. If you were in my position, what questions would you ask? What should I be paying attention to? What am I likely to miss?

The most powerful prompt is sometimes not a prompt at all. It is a confession: I do not know what questions to ask. Help me find them.

In practice, this looks like a founder saying: "I am building a healthcare platform for seniors. I have deep experience in patient advocacy but limited experience in regulatory compliance for medical devices. What questions should I be asking that I probably have not thought of?" This mode of interaction is not a sign of weakness. It is, arguably, the highest form of intellectual sophistication: the ability to recognize the boundaries of your own knowledge and to use every available tool to push beyond them. The Socratic method itself was built on the premise that the wisest person is the one who knows what they do not know.

The most productive AI collaborations I have experienced were not the ones where I wrote the most precise prompts. They were the ones where I asked the most honest questions.

9. Prompt Writing and the Tradition of Human Knowledge Transfer

Before there were schools, before there were books, before there was writing itself, there was storytelling. It is the oldest technology humans have for taking what is inside one mind and placing it inside another. Around fires, across generations, through migration and upheaval and the slow accumulation of centuries, storytelling was how knowledge survived. Every culture that has ever existed has organized its deepest knowledge not into databases or frameworks but into narratives.


My own work in global health education with the United Nations, building the Y-PEER network across more than 60 countries, was rooted in this ancient practice. We were not distributing information packets. We were training young people to become storytellers for public health: to take complex knowledge about HIV prevention, reproductive health, and human rights and translate it into narratives that resonated with their peers across different languages, cultures, and contexts. The answer was never to simplify the knowledge. It was to invest deeply in the translation.

Prompt writing belongs to this lineage. When you write a prompt, you are doing what humans have done since the first elder sat a child down and said "let me tell you about the time": you are translating your experience into a form another intelligence can receive, process, and act on. The audience is different. The medium is different. The cognitive and creative demands are remarkably the same.

There is something reassuring about this continuity. AI may be new, but the human practice it requires is as old as language itself. We are, it turns out, very good at it.

10. The Emotional Dimension

Technical discussions of prompt engineering often treat the process as purely cognitive: a matter of logic, structure, and precision. This misses something important. The act of carefully articulating what you need, why it matters, and how you will know when it is right is also an emotional experience. It requires vulnerability; you must admit what you do not know. It requires patience; the AI needs you to be thorough in a way that most professional environments no longer reward. And it can be genuinely satisfying: there is a particular pleasure in finding exactly the right way to express a complex idea, in watching an AI produce something that captures what you meant but could not have built yourself.

But perhaps the deepest emotional register of prompt writing is the one that comes after the satisfaction: the quiet recognition that you understand something about yourself that you did not understand before. You sat down to write instructions for a machine, and somewhere in the process you discovered what you actually think, what you actually value, what you are actually trying to do. This is not a side effect of the work. It is, for many practitioners, the reason they keep returning to it. The output is useful. The self-knowledge is transformative.

11. Practical Implications

Reframe "prompt engineering" as a professional competency. Organizations should stop treating prompt writing as a technical hack and start recognizing it as a form of structured thinking. The best prompt writers are not necessarily the most technical people. They are the clearest thinkers. Investing in prompt writing capability is investing in organizational clarity.

Use prompt writing as a knowledge management tool. The prompts themselves are artifacts of institutional knowledge. A well-written prompt that produces reliable results is, in effect, a codified decision framework. Organizations that collect and refine their prompts are building a knowledge base that captures expertise in a uniquely portable and testable format.

Value the process as much as the output. When leaders write their own prompts rather than delegating to a technical intermediary, they gain insights into their own thinking that no amount of delegated AI use can provide. The cognitive benefits of prompt writing accrue to the writer, not the reader of the output.

Recognize domain experts as natural AI collaborators. The Great Inversion suggests that the people who should be writing prompts, and by extension building AI-powered solutions, are the people who understand the problem domain most deeply. This has implications for hiring, team design, and organizational structure that most companies have not yet absorbed.

Cultivate the practice of strategic questioning. Train people not just to prompt AI effectively but to question it strategically. The ability to say "what am I missing" and to use AI as a sounding board for unexplored assumptions may be the single highest-leverage AI skill available today. It requires no technical background, only intellectual honesty and the willingness to treat the boundaries of your knowledge as starting points rather than endpoints.

Conclusion: The Most Human Interface

We are accustomed to thinking about AI through the lens of automation: what can the machine do that humans used to do? This framing misses the reciprocal question: what does working with the machine require humans to do that they would not otherwise have done?

The answer, in the case of prompt writing, is: clarify. Articulate. Translate. Question. Discover. These are not mechanical activities. They are among the most distinctively human capabilities we possess. They require self-awareness, empathy, creativity, and intellectual honesty. They get easier with practice but never become trivial, because every new problem requires a new act of translation, and every new question opens a territory you did not know was there.

In an era of justified anxiety about what AI will take from us, it is worth pausing to notice what AI, in this particular dimension, gives back. It gives us a reason to slow down and think about what we actually know and what we are trying to accomplish. It gives us a mirror not only for our expertise but for our goals, forcing us to ask whether what we say we want and what we are actually building are the same thing.

The technology is artificial. The self-discovery it makes possible is as personal and as real as anything we do.

References: Benjamin, W. (1923). The task of the translator.  ·  Chase et al. (2009). Teachable agents and the protégé effect.  ·  Janis, I. L. (1972). Victims of groupthink.  ·  Jung, C. G. (1959). The archetypes and the collective unconscious.  ·  Nonaka & Takeuchi (1995). The knowledge-creating company.  ·  Polanyi, M. (1958, 1966). Personal knowledge; The tacit dimension.  ·  Rogers, C. R. (1961). On becoming a person.  ·  Schön, D. A. (1983). The reflective practitioner.  ·  Sennett, R. (2008). The craftsman.  ·  Vygotsky, L. S. (1978). Mind in society.

White Paper  ·  Alvis  ·  Clinical Infrastructure
Patient Mental Models as Clinical Infrastructure
Modern healthcare produces clinically accurate records while patients leave visits confused and unable to act. The problem is not health literacy; it is mental model misalignment. An Alvis white paper on why adaptive, layered summaries are necessary for safe and humane care, with full case study and appendices.
December 2025

Srdjan Stakić, EdD, MFA  |  Founder & CEO, Alvis.Care  |  Chair, Stanford Cancer Center Patient & Family Advisory Council  |  Member, Stanford Cancer Institute Scientific Review Committee  |  v0.8 · December 2025

Executive Summary

Modern healthcare systems produce clinically accurate, legally defensible records, yet patients routinely leave visits confused, overwhelmed, or unable to act on what was decided. This gap is not primarily a failure of intelligence, education, or compliance. It is a failure of mental model alignment. When this translation fails, care fails, even when clinicians do everything right.

Alvis is built on a simple but under-recognized insight: patients do not organize their health around diagnoses, billing codes, or provider specialties. They organize it around daily actions, energy, fear, logistics, and meaning. When medical information is not translated into the patient's lived mental model, clinical intent is lost, adherence suffers, and safety risks increase.

Key Definitions

Mental Model Alignment: The degree to which medical information is organized around how a patient actually thinks about their health, rather than how clinical systems categorize it. Misalignment occurs when accurate information fails to be actionable because it does not map to the patient's lived questions.

Capacity Signals: Observable indicators during a clinical conversation that reveal a patient's current cognitive bandwidth, emotional state, areas of resistance, and preferred framing. These signals inform how information should be presented, not what information should be given.

Layered Clinical Meaning: A framework in which a single clinical encounter produces multiple outputs, each optimized for a different audience and a different core question: "What do I do?" vs. "What should I watch for?" vs. "What happened clinically?"

1. The Problem: Clinical Accuracy Without Comprehension

Patient portals and After Visit Summaries are designed around how healthcare systems function: diagnosis codes, medication lists, referrals, and orders. These documents are optimized for legal defensibility, billing, and clinical traceability. They are not optimized for how patients actually make sense of their health. As a result, patients often receive information that is technically correct but not actionable in its current form. The risk is not misinformation. The risk is misalignment.

A patient who cannot determine which pill to take in the morning, whether to fast before a lab, or which floor to visit for a specialist appointment is at real clinical risk, even if the correct information exists somewhere in the record.

What This Is Not. This is not summarization. Summarization reduces length while preserving content. What patients need is translation plus prioritization under capacity constraints. The content may remain the same length or even expand where clarification is needed. Existing AI-powered visit summaries have made progress on readability, reducing grade levels and replacing jargon. But readability is not the same as alignment. A document can be written at a 6th-grade level and still be organized around the wrong questions.

Case Study: Patient A. Patient A is a 77-year-old woman managing hypertension, atrial flutter requiring anticoagulation, osteoarthritis, asthma, and insomnia. She visited her primary care clinician accompanied by her adult son, who serves as her caregiver advocate. The 40-minute visit covered blood pressure medication adjustments, joint pain management, sleep medication review, and coordination with multiple specialists. After the visit, she received a standard After Visit Summary.

From Patient A's After Visit Summary (patient-facing sections):

AFTER VISIT SUMMARY: PATIENT-FACING SECTIONS

Today's medication changes
START taking: hydroCHLOROthiazide (HYDRODIURIL)
CHANGE how you take: metoprolol succinate ER (TOPROL XL) - how much to take

Labs ordered today
ANA IFA SCREEN W/RFLX TO TITER/PATTERN
CYCLIC CITRULLINATE PEPTIDE (CCP) ANTIBODY IGG
RHEUMATOID FACTOR (RA) QUANTITATIVE
URINALYSIS WITH MICROSCOPIC EXAM

This document is clinically accurate. It is also not actionable in its current form. Patient A cannot determine from this text why her medication changed, what "ANA IFA SCREEN" means, or why these labs were ordered. The information exists, but it is not organized around any question she would actually ask.

2. The Limits of Traditional Health Literacy

Health literacy research has made important contributions by advocating for simpler language, lower reading levels, cultural tailoring, and language translation. However, these approaches operate at the population level; they assume that comprehension is primarily a function of readability and vocabulary. In practice, many patients who can easily decode medical language still struggle to act on it. The limiting factor is not reading ability, but cognitive load, emotional state, energy, trust, and context.

During her visit, Patient A demonstrated strong health literacy in some areas and clear overwhelm in others. When her clinician mentioned the diuretic by its generic name, she immediately translated it as "the water pill." When asked about physical therapy, she sighed and said, "I need to go to four or five different doctors now." Her constraint is not comprehension; it is bandwidth and prioritization under load. A system that responds to this by simplifying vocabulary misses the point entirely.

3. Mental Models: How Patients Actually Organize Care

Patients do not think in ICD-10 codes or problem lists. They think in questions: What do I need to do today? Which pills do I take in the morning versus at night? Do I need to fast for this test? Where do I go, and how hard will it be to get there? What can I safely ignore for now?

Clinical conversations make the mental model gap visible. Consider this exchange from Patient A's visit:

TRANSCRIPT EXCERPT

Clinician: Rheumatologist not gonna help you.

Patient A: It will not help me?

Clinician: It's only for like... like autoimmune sort of arthritis. So if you have like rheumatoid arthritis. If you want I can check for that.

Patient A: Oh, do you know better?

In Patient A's mental model, a rheumatologist is "a doctor for joint pain." In the medical system's model, a rheumatologist is "a specialist for autoimmune inflammatory conditions." These are not the same thing, and Patient A's confusion is not a failure of intelligence. It is a failure of translation between mental models. The clinician resolves this in real time by ordering blood tests, but this resolution does not appear clearly in the After Visit Summary, which simply lists the lab orders without context.

4. Conversation Reveals Capacity Better Than Documentation

The spoken interaction between patient and clinician reveals how much cognitive bandwidth the patient has, which concepts resonate or confuse, where the patient resists or negotiates, and what the patient rephrases in their own words. Importantly, the absence of discussion does not mean the absence of clinical reasoning. Clinicians often capture full intent in the medical record rather than the conversation. Therefore, conversation reflects patient capacity, not clinical completeness.

What the Patient Said | Capacity Signal
"I need to go to four or five different doctors now." | Overwhelmed by logistics; adding more appointments will meet resistance
(Sighs when asked about trying a topical pain gel) | Skeptical of yet another intervention; needs permission to not act
"I can tell you something good. My asthma is excellent." | Needs wins acknowledged, not just problem-focused
"I will go with you together." (to caregiver, about exercise) | Prefers companion-based activity over clinical settings

None of these signals appear in the After Visit Summary. A system that generates patient-facing content solely from EHR data will miss them entirely.

5. Why Medical Records Still Matter

Relying solely on transcripts risks underrepresenting clinical intent. Clinicians routinely consider contingencies, risks, and follow-up plans that are not spoken aloud but are documented in progress notes and orders. A safe patient-facing system must therefore draw from both sources: conversation, to understand the patient's mental model and capacity; and medical records, to ensure clinical completeness and safety.

Patient A's transcript captures rich detail about pain management and blood pressure. But several clinically important items received minimal verbal discussion: the full list of upcoming appointments (seven in the next three months), specific test names for the rheumatology workup, the rationale for checking liver function before recommending higher-dose Tylenol, and physical therapy scheduling instructions. These details exist in the EHR. A transcript-only system would leave Patient A without information she may need.

6. Mental Model Alignment as a Clinical Safety Issue

Mental model misalignment produces concrete clinical failures: missed labs when a patient does not understand why a test was ordered; wrong medication timing when a patient cannot parse the instructions; failed specialist scheduling when a patient does not understand which specialist to see or why; duplicate or conflicting medications when a patient sees multiple providers and cannot track changes; avoidable ED visits when a patient does not understand warning signs.

Healthcare has developed rigorous safety protocols for medication reconciliation, surgical time-outs, and allergy verification. Mental model alignment belongs in this category.

Like medication reconciliation, mental model alignment addresses a failure mode that occurs not because anyone was wrong, but because the system did not account for translation loss. This positions it as error prevention, not patient engagement.

7. The Alvis Framework: Layered Clinical Meaning

Alvis proposes a three-layer model of post-visit communication, each aligned to a different mental model and purpose:

Layer | Audience | Core Question | Characteristics
For Me | The patient | "What do I do?" | Short, actionable, uses patient's own language
For My Family | Caregivers, advocates | "What should I watch for?" | More complete, includes rationale and red flags
For My Records | Clinicians, institutions | "What happened clinically?" | Full documentation, legal traceability

These layers are not redundant. They are complementary. Attempting to collapse them into a single document fails all audiences simultaneously. The "For My Records" layer remains the source of truth for all legal, billing, and care coordination purposes. The patient and family layers are communication aids derived from that record, not substitutes for it.
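
One way to see the complementarity is to sketch the three layers as a single data structure. The following Python is purely illustrative: the class and field names are assumptions for this sketch, not the Alvis implementation, and the example strings paraphrase Patient A's visit.

```python
from dataclasses import dataclass

# Illustrative sketch of the three-layer output model. One encounter yields
# three views; the records layer remains the authoritative source of truth.

@dataclass
class LayeredSummary:
    for_me: str          # patient layer: "What do I do?"
    for_my_family: str   # caregiver layer: "What should I watch for?"
    for_my_records: str  # clinical layer: "What happened clinically?"

visit = LayeredSummary(
    for_me="Start the new water pill (hydrochlorothiazide) each morning.",
    for_my_family="Watch for dizziness this week; it can signal low blood pressure.",
    for_my_records="Initiated hydrochlorothiazide; adjusted metoprolol succinate ER dose.",
)
```

Modeling the layers as fields of one object, rather than three separate documents, makes the design constraint visible in the type itself: the patient and family layers are derived views that cannot exist without the clinical record they translate.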

Infrastructure Parity. Current clinical infrastructure is designed for clinicians (EHR notes, clinical decision support, order entry), institutions (billing codes, legal documentation, quality metrics), and care coordination (referral systems, medication reconciliation). Patients benefit from these systems, but indirectly. Alvis proposes that patient-facing infrastructure should exist at the same level of importance as clinician-facing and institution-facing infrastructure, not as a replacement for existing systems but as a peer layer.

Zero Provider Workflow Change. Alvis requires no change to clinician workflow. It operates on artifacts that already exist: the ambient audio recording, the EHR data generated through normal documentation, and the After Visit Summary produced by standard EHR workflows. Clinicians do not dictate to Alvis, configure Alvis, or modify their documentation practices. The system is additive, not interruptive. The only exception is contradiction handling: when the system detects a conflict between transcript and EHR, it requests clinician clarification before releasing the patient summary.
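
The additive pipeline with its single interruptive step, the contradiction gate, can be sketched as follows. This is a toy illustration under stated assumptions: the contradiction check is a keyword stand-in (a real system would require clinical NLP), and none of these names correspond to actual Alvis code.

```python
# Toy sketch of the pipeline: consume artifacts that already exist, change no
# clinician workflow, and hold release only when transcript and EHR conflict.

def find_contradictions(transcript: str, ehr_note: str) -> list[str]:
    """Keyword stand-in: flag medications the transcript says to stop while
    the EHR still lists them as active. Illustrative only."""
    flagged = []
    for med in ("metoprolol", "hydrochlorothiazide"):
        if f"stop {med}" in transcript.lower() and med in ehr_note.lower():
            flagged.append(med)
    return flagged

def generate_patient_summary(transcript: str, ehr_note: str) -> dict:
    """Additive step run after the visit; clinicians configure nothing."""
    conflicts = find_contradictions(transcript, ehr_note)
    if conflicts:
        # The one interruptive case: request clinician clarification
        # before the patient-facing summary is released.
        return {"status": "held_for_clinician", "conflicts": conflicts}
    return {"status": "released"}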

8. Adaptive Translation as Clinical Intervention

Translation between mental models is not cosmetic. It is a clinical intervention. Consider how a physical therapy referral might be communicated to two different patients:

Patient B (proactive, engaged): "Schedule your physical therapy appointment at [clinic name]. First available is usually 2–3 weeks out. Video visits are also available and equally effective."

Patient A (overwhelmed): "Your clinician put in an order for Physical Therapy for your back pain. You do not have to go right away, but the referral is in the system if you decide you want to try it later."

Both statements are accurate. Both are safe. But they respect different patient capacities. Patient A, who expressed resistance to more appointments, receives permission to defer. Patient B receives logistical support to act immediately. Both preserve the same clinical intent, but for each patient, only the version matched to her capacity is likely to be followed.

9. Limitations and Safety Considerations

Transcripts can be incomplete due to audio quality, interruptions, or conversations that continue after recording ends. EHR notes can contain copy-forward errors, outdated problem lists, or templated language that does not reflect the specific encounter. Critical actions (items that, if missed, could result in patient harm) must be flagged for visual emphasis regardless of overall simplification level.

Most importantly: not all clinical encounters generate transcripts suitable for capacity-informed personalization. Patients in acute emotional distress, cognitive crisis, or conflict with their provider may generate transcripts that reflect dysregulation rather than comprehension style. Clinicians who are rushed or inattentive may produce conversations that never allow the patient to express themselves. Personalizing based on such a transcript could amplify harm rather than reduce it.

Alvis therefore includes a conversation quality assessment. When quality is below threshold, the system falls back to: default health literacy guidelines rather than personalized framing; routing to a designated caregiver rather than the patient directly; flagging for human review before release; or generating from EHR data alone with no transcript-informed personalization. This is not a limitation to be solved later; it is a core safety requirement.
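
The quality-gated routing can be expressed as a small decision function. This is a simplified sketch: the threshold value is assumed for illustration, the quality score is treated as given rather than computed, and the fourth fallback (EHR-only generation with human-review flagging) is folded into the default route.

```python
# Simplified sketch of the conversation quality gate described above.
# Threshold and score semantics are placeholders, not Alvis parameters.

QUALITY_THRESHOLD = 0.7  # assumed value for illustration

def choose_output_mode(quality_score: float, has_caregiver: bool) -> str:
    """Route summary generation based on conversation quality."""
    if quality_score >= QUALITY_THRESHOLD:
        return "personalized"        # transcript-informed, capacity-aware framing
    if has_caregiver:
        return "route_to_caregiver"  # deliver via the designated advocate
    # Below threshold with no caregiver: default health literacy guidelines,
    # flagged for human review, or EHR-only generation with no personalization.
    return "default_literacy_guidelines"
```

The safety property is the same as in the contradiction gate: when the input is unreliable, the system degrades to the most conservative output rather than personalizing on noise.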

10. Evidence Base and Research Agenda

This framework synthesizes established evidence with novel contributions. The health literacy gap is well-documented across decades of research. AI's ability to improve readability of patient materials is demonstrated in recent studies from NYU Langone and JMIR publications. Established safety practices confirm that structured information-transfer protocols reduce preventable harm.

The framework's novel contributions (the three-layer output model, mental model alignment as a safety practice, infrastructure parity as a design principle, and transcript + EHR reconciliation with capacity-informed framing) represent new proposals that generate testable hypotheses. The research agenda includes: validating capacity signal detection from transcripts; randomized controlled trials measuring medication adherence and lab completion; chart review studies confirming the contradiction-detection protocol; and implementation studies examining whether the infrastructure parity reframing changes institutional resource allocation.

Conclusion

Healthcare does not fail most often because clinicians are wrong. It fails most often because meaning is lost between intention and action.

Patient A left her visit with accurate clinical documentation. But accuracy alone did not tell her which pill to restart, why physical therapy was optional but the urine test was not, or how to think about her conversation with the cardiologist next month.

A healthcare system that cannot reliably translate clinical intent into patient action is incomplete, no matter how advanced its clinical capabilities.

Alvis is built to close that gap. By treating patient mental models as a first-class layer of clinical infrastructure, it enables care that is not only accurate, but usable, humane, and safe.

Appendices

Appendix A: "For Me" Summary (Patient-Facing)

This is the patient-facing summary optimized for Patient A's mental model. It answers "What do I do?" in language that matches their own.

Medication Updates
RESTART
Water pill (Hydrochlorothiazide)
Start taking this again, 1 pill per day. This will help bring down the swelling in your legs and lower your blood pressure.
CONTINUE
Heart/blood pressure pill (Metoprolol)
Keep taking 1 pill per day. This protects your heart.
Pain Management
Avoid Advil while you are on blood thinners. Tylenol is safe: up to 1000 mg at one time. Voltaren Gel: you can try this on your knees and hands.
To-Do List
1. Lab Work (today, no fasting needed)
Urine test and blood test. The blood test checks if your joint pain might be from rheumatoid arthritis.

2. Schedule Cardiology
Go to the Cardiology Department to schedule your appointment with the new heart doctor.

3. Physical Therapy (optional)
The referral is in the system if you decide you want to try it later. No rush.

4. Urologist appointment
Keep your appointment scheduled for next week.

Next Visit: With your primary care clinician in about 2 months. Come sooner if blood pressure stays high.
Appendix B: "For My Family" Summary (Caregiver-Facing)

This is the caregiver-facing summary. It answers "What should I watch for?" and includes clinical rationale, scheduling details, and coordination notes.

STARTING
Diuretic (hydrochlorothiazide 25mg daily)
Previously stopped after a low blood pressure episode on a more aggressive regimen. Now restarting at lower overall medication load to address both persistent hypertension and leg edema. Watch for: dizziness when standing, especially in the first week.
CONTINUING
Beta blocker (metoprolol 50mg daily)
Staying at 50mg for cardiac protection beyond BP control. The medication also provides rate control for atrial flutter.
CONTINUING
Anticoagulation (apixaban / Eliquis)
Typically lifelong for atrial flutter. Question for cardiologist: Given history of blood in urine, what is the risk/benefit of continuing long-term?
Tests Ordered
Urinalysis: Patient requested this to document current status before urology appointment.
Rheumatology panel (ANA, RF, CCP): Checking for autoimmune arthritis. If positive, clinician will refer to rheumatology. If negative, confirms osteoarthritis which is managed differently.
Upcoming Appointments
Next week: Urology (new patient). Bring urine test results. Discuss blood in urine history.
Next month: Cardiology (new patient). Key questions: Eliquis duration, BP management strategy.
2 months: Primary care follow-up. BP check, rheum lab results.
Also scheduled: Sleep medicine, Endocrinology follow-up.
Things to Watch For
Dizziness/lightheadedness: May indicate BP dropping too low with new diuretic.
Worsening leg swelling: Should improve; if worsens or with shortness of breath, seek evaluation.
Visible blood in urine: Given history, should prompt urgent urology contact.
Back pain with new symptoms: Numbness, weakness, or bowel/bladder changes require immediate evaluation.
Appendix C: "For My Records" Schema (Clinical Layer)

This appendix describes the structure of the comprehensive clinical record layer. The "For My Records" layer is designed for care coordination, legal documentation, and situations where the complete clinical picture is required.

Component · Contents
Patient Demographics: Name, DOB, MRN, visit date/time, provider, location
Diagnoses Addressed: ICD-10 codes with descriptions for all conditions discussed
Vitals: BP, pulse, temp, weight, BMI; trending from recent encounters
Physical Examination: Documented findings by system (general, lungs, CV, MSK, psych)
Complete Medication List: All active medications with doses, frequencies, indications, and change dates
Orders Placed: Labs, imaging, referrals, prescriptions with order details
Clinical Notes: Assessment and plan by diagnosis, clinical reasoning, follow-up instructions
Appointment Schedule: All scheduled appointments with dates, times, locations, and preparation instructions
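As one possible concretization, the schema above could be expressed as typed components. The field names follow the table; the class names and types are assumptions for illustration, not Alvis's actual data model.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the "For My Records" layer as typed components.

@dataclass
class Demographics:
    name: str
    dob: str
    mrn: str
    visit_datetime: str
    provider: str
    location: str

@dataclass
class Diagnosis:
    icd10_code: str
    description: str

@dataclass
class MedicationEntry:
    name: str
    dose: str
    frequency: str
    indication: str
    change_date: Optional[str] = None

@dataclass
class ForMyRecords:
    demographics: Demographics
    diagnoses: list = field(default_factory=list)       # list[Diagnosis]
    vitals: dict = field(default_factory=dict)          # BP, pulse, temp, weight, BMI
    physical_exam: dict = field(default_factory=dict)   # findings keyed by system
    medications: list = field(default_factory=list)     # list[MedicationEntry]
    orders: list = field(default_factory=list)          # labs, imaging, referrals, rx
    clinical_notes: list = field(default_factory=list)  # assessment and plan by diagnosis
    appointments: list = field(default_factory=list)    # dates, times, locations, prep
```

Keeping this layer fully structured is what allows the "For Me" and "For My Family" layers to be generated from it rather than maintained in parallel.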

References: Berkman et al. (2011). Low health literacy and health outcomes. Annals of Internal Medicine.  ·  Coleman et al. (2006). The Care Transitions Intervention. Archives of Internal Medicine.  ·  Freeman (2012). The origin, evolution, and principles of patient navigation. Cancer Epidemiology.  ·  IOM (2004). Health Literacy: A Prescription to End Confusion.  ·  Johnson-Laird (1983). Mental models. Harvard University Press.  ·  Kutner et al. (2006). The health literacy of America's adults. NCES.  ·  Nutbeam (2000, 2008). Health literacy as a public health goal.  ·  Reinhard et al. (2012). Home alone: Family caregivers. AARP.  ·  The Joint Commission (2023). National Patient Safety Goals.  ·  Zaretsky et al. (2024). Generative AI to transform discharge summaries. JAMA Network Open.

Personal Essay · Survivorship · Published 2026
All My Clothes On

Last night I stood on a stage at Stanford and told a room full of critical care physicians about the worst year of my life.

Two years ago, I collapsed in a parking lot. Stage four cancer. Lungs, kidneys, intestines, all major bones. Three months to live.

Three months in the hospital. Seven rounds of sepsis. Multiple ICU stays. I couldn't remember the year. I couldn't form a sentence. I couldn't recognize people I loved.

Last night, I couldn't stop talking.

The event was called Critical Moments, hosted by Stanford Medicine's Critical Care team. They invited donors and supporters for an immersive evening: simulations, discovery stations, faculty talks on sepsis and resuscitation and consciousness. Then they asked me to close the night by sharing what critical illness looks like from inside the gown.

I told them about the morning the care team pulled me out of my room during rounds, stood me in the circle, and introduced me by name. Not by my diagnosis. By my name, my degree, my work. How that single act gave me back my sense of self at a moment when I had lost it entirely.

I told them about my family standing outside the ICU door, watching through a small window, while my oncologist said "I just don't know." About my parents showing up in full-body isolation suits and the moment I understood, even through the fog, that things were bad.

I told them about a red sweatshirt. About losing so much weight that it fit perfectly. About walking the ICU hallway and running into my nurse, who didn't recognize me. And the look on her face when she did.

I told them that I still wear the bracelet my niece made me in the ICU. Haven't taken it off.

And I told them that it is really strange to stand in front of that many doctors while having all my clothes on.

They laughed. One of the physicians told me that humor is a very good prognostic sign.

Here's what I didn't say on stage, because it wasn't my moment to pitch. But I'll say it here.

Everything I experienced in that ICU (the identity loss, the family standing at windows, the system that works brilliantly in crisis and fails quietly at home) is why I built Alvis. A 24/7 digital care advocacy and wellness monitoring platform for seniors and people with disabilities. Every feature traces back to something I lived.

I built it with AI-assisted development tools. No engineering team. A cancer survivor with a health education degree and a film production background, building healthcare technology that would have traditionally required millions and years.

The people closest to the problem are now the people most capable of solving it.

Two years ago, I was given three months.

Today, I'm in remission. I'm building. I'm caring for my parents. I'm chairing Stanford Cancer Center's Patient and Family Advisory Council. I'm standing in rooms full of doctors, fully clothed, telling my story so that it might change how someone else gets cared for.

Twelve months of remission became eighteen. And I have no intention of stopping.

07

Current Roles

Stanford Cancer Center Chair, Patient & Family Advisory Council
Stanford Cancer Institute Member, Scientific Review Committee
Stanford University Member, Community Advisory Board for Clinical Research
Stanford Emergency Department Member, Patient & Family Advisory Council
Mira Fellowship 2026 Fellow · Alvis.Care Palm Springs Pilots
Alvis.Care Founder & Chief Executive Officer
08

The Journey

Belgrade · Early Years
Flees Yugoslavia Alone at 15

As war broke out in Bosnia, Srdjan, 16, sought refuge with an American family he had never met. Rebuilt from zero. Helped his family immigrate two years later.

University of Michigan
BA in Biopsychology and Cognitive Science

Laid the scientific foundation that would underpin thirty years of health advocacy and, eventually, a technology platform.

Columbia Teachers College
MA in Health Education · Go Ask Alice

Worked at one of the first Internet-based health education platforms. His thesis asked whether the Internet could change health behavior internationally. The answer defined the next decade.

United Nations Population Fund
Establishing Y-PEER Across 60+ Countries

Joined UNFPA as its youngest staff member. Built Y-PEER, authored training materials in 15 languages, and created the UNFPA Special Youth Programme. Many Y-PEER participants became leaders of the Arab Spring.

Fulbright Fellowship · Slovak Republic
Comenius University Institute for Public Policy

Teaching fellowship recognized his intellectual contributions to global health education and peer-led methodology.

François-Xavier Bagnoud Center · UMDNJ
Director, CDC-Funded PMTCT Program

Led national HIV prevention programs in Botswana and Haiti. Botswana became the first high-burden African country validated for eliminating mother-to-child HIV transmission.

Yale School of Public Health
Yale Center for Public Health Preparedness

Designed and evaluated emergency response trainings for frontline public health workers, under the mentorship of Dr. Linda Degutis, later Director of the CDC's National Center for Injury Prevention.

USC · UCLA · Universal Pictures
MFA, Film Finance, and Documentary Work

Earned an MFA from USC's Peter Stark Program. Produced PBS documentary Dreams of Daraa in Syrian refugee camps in Jordan. Built the conviction that narrative is not a soft tool, it is one of the most powerful behavior change mechanisms available.

Personal Health Crisis
Stage 4 Lymphoma, and Surviving It

Experienced from the inside what he had spent his career trying to prevent. His sister called EMS, took him to the Stanford ED, and never left his sight. His parents flew from Belgrade. Together they navigated a tangled health system, trying to advocate for a brother and son they almost lost. What saved him wasn't only medicine. It was people who refused to let him disappear into the machinery of care.

Stanford · Present
Patient Advocacy Leadership and Alvis.Care

Chairs Stanford Cancer Center's PFAC, sits on Stanford Cancer Institute's Scientific Review Committee, and serves on multiple advisory boards. As 2026 Mira Fellow, preparing Alvis.Care for Palm Springs pilots. He is not finished.

Let's Build Something

The distance closes when someone shows up.

Whether you're an investor, a healthcare partner, a researcher, or someone who wants the systems to work better, the door is open.