Conference Room Wisdom



ATTENDEE BADGE

RAILS 2026 — Third Annual Responsible AI Leaders Summit
March 10-12, 2026 | Ridgeline Conference Center | Truckee, CA


KEVIN LLEWELLYN
Founder & Principal Auditor
Aligned Futures, LLC

Panelist | Speaker | Returning Delegate (Year 3)

Session Access: ALL-ACCESS
Badge #: 00247
Emergency Contact: Raya Osman, COO, Aligned Futures (on file)

Dietary Restriction: Tree nut allergy

Please wear your badge at all times. Badge scanning is required for session entry and is used to generate your personalized engagement report. Lost badges can be replaced at the registration desk with valid photo ID. A $25 replacement fee applies.


WELCOME TO RAILS 2026

From Measurement to Meaning

Dear Delegates,

On behalf of the RAILS 2026 Organizing Committee, welcome to the Ridgeline Conference Center in Truckee, California, for what promises to be our most substantive gathering yet.

Three years ago, this summit began as a conversation among twelve practitioners who shared a conviction: that the responsible development of artificial intelligence required not just principles but infrastructure. Not just aspiration but methodology. Not just white papers but working protocols, audit templates, and a community of professionals who could hold each other accountable to a standard that did not yet exist and that we would, together, build.

That infrastructure is now this room — or rather, these rooms, these panels, these working sessions, and the community of eight hundred and forty-seven professionals who have made the journey to be present with us this week. What was once a single table in a borrowed conference room at Georgetown is now a three-day, forty-two-session summit spanning eight time zones, twelve sponsor organizations, and a field that has grown from a niche concern into a recognized professional discipline with its own journals, certification pathways, and — we are pleased to say — its own annual gathering.

This year’s theme — From Measurement to Meaning — reflects an evolution in our field. We have built the tools. We have established the benchmarks. We have developed frameworks, taxonomies, and audit methodologies that did not exist when RAILS first convened. The question before us now is not whether we can measure, but whether our measurements are reaching the systems, communities, and lives they are designed to protect. Whether measurement, however rigorous, is sufficient — or whether the act of measuring has itself become the output, the deliverable, the thing we point to when someone asks what we do.

We believe the answer is both. Measurement is necessary. Meaning is the frontier. This summit is where the two converge.

We encourage you to explore the full program, attend sessions outside your specialty, and take advantage of the networking receptions each evening in the Ridgeline Atrium, where the lobby’s glass waterfall installation — recirculated, self-sustaining, and rather beautiful in the late afternoon light — provides a welcome ambient backdrop. The water comes from a slot in the ceiling and disappears into a trough in the floor and has no origin and no destination and never stops. Several delegates have remarked that it is unexpectedly calming.

Your Wi-Fi network is RAILS_Delegate and the password is Collaborate2026.

We are glad you are here. The work continues.

Warm regards,
The RAILS 2026 Organizing Committee


CONFERENCE PROGRAM — DAY 1

Tuesday, March 10, 2026

Conference rooms are located on the third floor of the Ridgeline Conference Center. Rooms are named for easy navigation: Foresight, Clarity, Vision, Resilience, Perspective, Integrity, Wisdom, and Balance. Wisdom is located between Integrity and Balance. A map of the third floor is included in your delegate packet.


9:00 – 10:15 | OPENING PLENARY — Conference Room Foresight

Responsible Scaling in a Post-Deployment Landscape: Lessons from Year Three

Keynote: Dr. Annika Lindqvist, Chief Ethics Officer, Meridian Systems

Dr. Lindqvist will reflect on three years of progress in responsible AI governance, drawing on Meridian’s experience implementing its Ethical Scaling Protocol (ESP) across fourteen product divisions and forty-seven regional markets. The ESP, now in its third iteration, provides a structured framework for integrating safety considerations into the product development lifecycle at a pace commensurate with deployment timelines. Dr. Lindqvist will address the challenges of scaling ethical oversight within an organization whose annual revenue grew 340% during the same period, and will share preliminary findings from Meridian’s internal “Safety Sentiment Survey,” which has tracked employee confidence in organizational ethics practices since 2024. Includes Q&A.

Dr. Lindqvist joined Meridian Systems in 2023 after serving as Deputy Director of the EU AI Safety Board. She holds dual doctorates in computational ethics and organizational behavior from ETH Zurich.


10:30 – 11:45 | CONCURRENT SESSIONS

Conference Room Clarity
AI Assurance vs. AI Accountability: A Definitional Workshop
Facilitators: Deepak Varma (Aligned Futures, LLC), Prof. Joanna Kelleher (Georgetown AI Policy Lab)

As the field matures, practitioners increasingly encounter terminological drift between “assurance” (the process of establishing confidence) and “accountability” (the assignment of responsibility). This workshop convenes stakeholders from audit, compliance, and governance to establish shared definitions that can support interoperable frameworks. Is assurance a subset of accountability, or is accountability a byproduct of assurance? Can an organization be “assured” without anyone being “accountable”? Participants will leave with a working lexicon and a better understanding of why the same word means different things in different rooms, and why the rooms, nonetheless, are adjacent.

Conference Room Vision
Non-Binding Frameworks and Their Binding Consequences: A Regulatory Roundtable
Moderator: Representative Gail Thornberry (D-CA)

Panelists from five national regulatory bodies discuss the evolving landscape of voluntary AI safety commitments and their relationship to enforcement. What does it mean for a standard to be “non-binding” when market dynamics ensure compliance? How do we measure the effectiveness of measuring effectiveness? And at what point does a voluntary framework, adopted by every major firm, become the de facto regulation that its authors insisted it was not? The roundtable will also address the curious phenomenon by which the companies most resistant to binding regulation are also the most enthusiastic participants in non-binding frameworks — and whether this enthusiasm is itself a form of evidence.

Conference Room Resilience
Responsible Scaling: A Practitioner’s Journey
Speaker: Tomás Aguilar, VP of Trust & Safety, Palomar AI

Tomás shares the personal and organizational challenges of building a safety-first culture in a company that grew from 15 to 1,200 employees in eighteen months. Includes a frank discussion of what “responsible” means when your Series C depends on shipping, when your safety team has quadrupled but your deployment pipeline has grown eightfold, and when the word “responsible” appears in your company’s tagline, on your business cards, and in the title of every press release you’ve issued since founding — so that by now it has become, as Tomás puts it, “less a commitment than a weather condition.”


12:00 – 1:30 | NETWORKING LUNCH — Ridgeline Atrium

Lunch is provided courtesy of our sponsors. Dietary accommodations are marked at each station. If you have indicated an allergy on your registration, your badge has been flagged in our system and catering staff have been notified. Your safety is our priority.


2:00 – 3:15 | CONCURRENT SESSIONS

Conference Room Integrity
Quantifying Stakeholder Trust: New Metrics from the ARGO Collaboration
Speakers: Dr. Wei-Lin Park (ARGO), Sandra Obi (Ethicality Inc.)

The ARGO Collaboration presents its 2026 Trust Quantification Index (TQI), a composite metric derived from seventeen weighted indicators of public confidence in AI systems. The TQI provides organizations with a single, trackable number — updated quarterly, benchmarked against industry peers — representing the degree to which their stakeholders believe the organization is behaving responsibly. Preliminary data suggests the index is highly correlated with prior-year index scores, which the ARGO team interprets as evidence of measurement stability rather than of stasis.

Conference Room Perspective
From Principles to Practice: Building an Audit Culture
Moderator: Kevin Llewellyn (Aligned Futures, LLC)

A working session for organizations beginning or expanding their AI audit programs. Kevin Llewellyn, returning for his third year as a RAILS panelist, leads a discussion on the operational challenges of translating high-level ethical principles into repeatable, measurable, and auditable processes. Kevin’s firm, Aligned Futures, has conducted over 200 independent audits since 2023 and maintains a client satisfaction rate of 94%. The session will cover audit design, stakeholder mapping, deliverable templates, and the question of what happens after the audit report is delivered — a question Kevin describes as “the most important one we don’t have a metric for yet.” Includes breakout exercises and a sample audit timeline.

Conference Room Balance
Navigating Military AI: Responsible Data Ecosystems in Defense Applications
Speaker: Col. James Hartley (ret.), Senior Fellow, Atlantic Digital Policy Institute

Col. Hartley examines the unique challenges of applying responsible AI principles in defense contexts, where the stakeholders include both the users of algorithmic systems and the populations those systems are deployed against, and where the word “safety” carries an ambiguity the speaker acknowledges but does not resolve.


3:30 – 4:30 | CLOSING REMARKS — Conference Room Foresight

Looking Ahead: The RAILS 2027 Planning Committee Invites Your Input

The Committee is accepting proposals for next year’s theme and keynote nominations through May 15.


RAILS 2026 is made possible through the generous partnership of organizations committed to advancing responsible AI development.

PLATINUM SPONSORS

  • Meridian Systems — “Building Trust at Scale”
  • Palomar AI — “Intelligence, Responsibly”
  • Cobalt Dynamics — “Your Future, Audited”

GOLD SPONSORS

  • NexGen Federal Solutions — “Securing Tomorrow’s Infrastructure”
  • Ethicality Inc. — “Accountability as a Service”
  • PanGlobal Risk Analytics

SILVER SPONSORS

  • Aligned Futures, LLC
  • The Atlantic Digital Policy Institute
  • Georgetown AI Policy Lab
  • Harmon-Keyes Consulting Group
  • The Kellerman Foundation for Ethical Technology
  • AI Safety Monthly (media partner)

RAILS gratefully acknowledges that several of our platinum sponsors are also among the organizations whose AI systems are subject to independent safety audits conducted by other RAILS sponsors, and that several of our silver sponsors are firms whose revenue derives in whole or in part from conducting those audits. We view this not as a conflict but as evidence of a maturing ecosystem in which the auditor-client relationship is one of partnership, shared commitment, and mutual investment in a safer future. The trust that sustains this ecosystem is the same trust our sessions are designed to measure, and we are proud to report that it remains, by every available metric, high.

For information about sponsorship opportunities for RAILS 2027, please contact partnerships@railssummit.org.


RIDGELINE CONFERENCE CENTER — GUEST ADVISORY

Date: Tuesday, March 10, 2026
Re: Air Quality Notice — Truckee Basin

Dear Guests,

The National Weather Service has issued an air quality advisory for the Truckee River basin effective through Wednesday evening. A thermal inversion over the region has caused a layer of warmer air to settle above the valley floor, trapping particulate matter below and resulting in elevated PM2.5 readings.

Current conditions: PM2.5 at 68 µg/m³ (Unhealthy). Visibility is reduced to approximately 4 miles. The haze is visible from upper-floor windows and may be most pronounced in the early morning hours before solar heating reaches the valley floor.

Guests with respiratory sensitivities are advised to limit extended outdoor activity. The conference center’s HVAC system maintains interior air quality within ASHRAE-recommended parameters. Conference programming is unaffected.

The inversion is expected to dissipate Thursday morning when a frontal system arrives from the northwest. Until then, the warm air above and the cold air below will remain in their current arrangement.

We appreciate your understanding and wish you a pleasant stay.

— Ridgeline Conference Center Management


CONFERENCE PROGRAM — DAY 2

Wednesday, March 11, 2026


9:00 – 10:30 | KEYNOTE — Conference Room Foresight

Beyond Alignment: Measuring What Matters in a Post-Audit Landscape

Speaker: Kevin Llewellyn, Founder & Principal Auditor, Aligned Futures, LLC

The AI safety audit, as a practice, is now entering its fourth year as a recognized professional discipline. In that time, the field has developed robust methodologies, standardized reporting formats, and an expanding body of case literature that stretches, in Aligned Futures’ own internal library, to over nine hundred documents. Yet a persistent question remains — one that Llewellyn suggests has become more rather than less urgent as the discipline matures: are we measuring what matters, or are we measuring what is measurable?

This keynote examines the challenge of audit reflexivity — the phenomenon by which the act of measuring a system’s safety properties alters the system’s behavior in ways that may render the original measurement obsolete. Drawing on Aligned Futures’ experience conducting 214 independent audits across financial services, healthcare, education, and criminal justice applications, Llewellyn will present a framework for distinguishing between substantive metrics (those correlated with real-world outcomes) and performative metrics (those correlated primarily with other metrics).

Key topics include:

  • The Goodhart problem in iterative audit design: when safety benchmarks become optimization targets, and the target becomes what organizations are safe from rather than who they are safe for
  • Audit fatigue and the diminishing signal-to-noise ratio in mature compliance environments, where the fourteenth audit in a cycle may be measuring the organization’s proficiency at being audited rather than its actual safety posture
  • Reflexive measurement: the observer effect in algorithmic accountability, and the question of whether an observed system and an unobserved system are, for practical purposes, different systems
  • Case studies in what Llewellyn calls “the measurement gap” — the distance between a metric and the human outcome it purports to represent, illustrated through three examples from Aligned Futures’ client portfolio (anonymized)

The keynote will close with a preview of Aligned Futures’ forthcoming Audit Impact Assessment (AIA) protocol, which attempts to track the real-world outcomes of audit recommendations over a twenty-four-month horizon. Preliminary results will be published in Q4.

Kevin Llewellyn is the founder of Aligned Futures, LLC, which has been recognized by AI Safety Monthly as one of the “Top 20 Independent Audit Firms” for two consecutive years. He previously held senior ML engineering roles at Cobalt Dynamics, where he led the Fairness and Accountability team from 2019 to 2022.


11:00 – 12:15 | CONCURRENT SESSIONS

Conference Room Wisdom
Case Studies in Differential Outcomes: A Breakout Workshop
Facilitators: Dr. Priya Nair (Stanford CALM Lab), David Aronson (Office of Juvenile Justice Programs, ret.)

This workshop examines four case studies in which algorithmic decision-support systems produced measurably divergent outcomes across demographic groups. Each case study is drawn from a real-world deployment and has been anonymized in accordance with the RAILS Case Study Protocol (v2.1). Stakeholder names have been replaced with participant codes. Facilitators will guide small-group analysis using the RAILS Differential Outcomes Rubric.

Case Study 1 concerns a healthcare triage algorithm that assigned lower urgency scores to patients in certain zip codes. The disparity was identified through routine audit.

Case Study 2 concerns a hiring-screen model whose interview-recommendation rates varied by 11.7% across gender categories. The model was retrained.

Case Study 3 concerns a pilot juvenile diversion program in which a risk-assessment algorithm was used to identify youth for early intervention. Program data indicates that the algorithm’s recommendations were adopted in 89% of cases reviewed. A subsequent review — conducted eighteen months after initial deployment — identified a 4.2% disparity rate in classification accuracy across racial categories. One participant in the pilot program, a fifteen-year-old female classified as high-risk, was placed in a residential diversion program for four months. During this period, the participant was separated from her primary household. The pilot was subsequently restructured. Stakeholder feedback has been incorporated into the revised model. The revised model’s disparity rate is 1.8%.

Case Study 4 concerns a predictive policing tool whose deployment pattern correlated with existing patrol allocation rather than with crime incidence. The tool was retired.

Workshop participants will analyze each case study and develop recommendations for audit protocols that capture disparate impact at the individual level. Particular attention will be given to the gap between aggregate metrics (which may show a system performing within acceptable parameters) and individual outcomes (which the aggregate may not represent). No prior familiarity with these specific cases is required.

Note: Case study materials are provided for educational purposes and do not represent the views of any participating organization, agency, or affected stakeholder.

Conference Room Integrity
The ROI of Responsibility: Making the Business Case for AI Ethics
Speaker: Janet Howell, Managing Partner, Harmon-Keyes Consulting Group

Howell presents a framework for quantifying the return on investment of ethical AI programs, including reputational risk mitigation, regulatory readiness, and the client-retention benefits of proactive safety auditing. Includes a cost-benefit model template.

Conference Room Perspective
Scaling Trust: Preview of the RAILS 2027 Theme
Panel Discussion — Moderated by the RAILS Planning Committee


TEXT MESSAGES — WEDNESDAY, MARCH 11

Kevin Llewellyn → Raya Osman

10:47 AM Keynote went fine. Mostly the same deck from Seattle but I added the reflexive measurement section. Good turnout, maybe 300. Annika introduced me, which was nice of her. Or strategic. Both.

10:48 AM Dinner tonight is with the Cobalt people. 7:30 at the restaurant in the lobby, the one by the waterfall. They want to talk about renewing for next year.

Raya Osman → Kevin Llewellyn

10:52 AM Great. How many at dinner? I need to update the pipeline sheet.

Kevin Llewellyn → Raya Osman

10:53 AM Four from their side, just me from ours. Maybe five from theirs, Annika might come. She’s on three panels here. I think she’s on every panel everywhere.

10:54 AM The Okafor case is in the breakout materials. Listed as a pilot outcome.

Raya Osman → Kevin Llewellyn

10:57 AM Are you going to the session?

Kevin Llewellyn → Raya Osman

10:58 AM I’m going to all of them.

11:03 AM It’s anonymized obviously. Participant code. Fifteen year old female. Four months residential placement. 4.2% disparity rate. They restructured the pilot afterward and the new rate is 1.8%.

11:04 AM So that’s good.

Raya Osman → Kevin Llewellyn

11:06 AM Kev.

Kevin Llewellyn → Raya Osman

11:07 AM I know.

11:14 AM There’s a haze over the valley. You can see it from the hotel room. Some kind of inversion, they put a notice under the door. The warm air sits on top and the cold air sits on the bottom and everything in between just hangs there. You can see the mountains above it, perfectly clear, and then this band of brown.

11:15 AM Anyway. Tell Deepak his session yesterday was solid. Good energy in the room. The definitional workshop landed well.

11:15 AM I should go. Breakout starts in 45.

Raya Osman → Kevin Llewellyn

11:16 AM Will do. Get the Cobalt renewal. We need it for Q2.


CONFERENCE PROGRAM — DAY 3

Thursday, March 12, 2026


9:00 – 10:15 | CONCURRENT SESSIONS

Conference Room Clarity
Audit Standards Harmonization: Toward a Unified Protocol
Facilitator: Prof. Joanna Kelleher (Georgetown AI Policy Lab)

This working session builds on two years of cross-organizational dialogue to develop a harmonized audit protocol that can serve as the basis for mutual recognition among independent audit firms. The goal is not to mandate a single methodology but to establish interoperability — so that an audit conducted by one firm can be understood, evaluated, and accepted by another. Participants will review the draft Unified Audit Protocol (UAP) document, now in its seventh revision, and provide feedback on sections covering scope definition, stakeholder identification, data access requirements, and reporting formats. The UAP is a living document. It will continue to be a living document for as long as the conditions it describes continue to change, which is to say, indefinitely.

Conference Room Wisdom
Accountability Frameworks for a Changing Landscape
Moderator: Kevin Llewellyn (Aligned Futures, LLC)

Panelists:

  • Dr. Annika Lindqvist, Chief Ethics Officer, Meridian Systems
  • Tomás Aguilar, VP of Trust & Safety, Palomar AI
  • Yuki Fernandez, Director of Algorithmic Accountability, National Consumer Alliance (previously: Senior Counsel, ACLU Racial Justice Program)
  • David Aronson, Office of Juvenile Justice Programs (ret.)

As AI governance matures, the question of accountability becomes simultaneously more urgent and more diffuse. Who is accountable when an algorithm produces a harmful outcome? The developer who trained the model? The product team that deployed it? The auditor who reviewed it and issued a conditional approval? The regulator who accepted the audit? The framework that the regulator relied on? Or the conference at which the framework was developed — this conference, in this room, between Integrity and Balance?

This panel convenes practitioners from across the accountability chain to discuss the structural, legal, and ethical dimensions of a question that has no settled answer and may not benefit from one. Yuki Fernandez, who joined the National Consumer Alliance after nine years of civil rights litigation, will offer a perspective that the program describes as “practitioner” but that Ms. Fernandez, in pre-conference correspondence, described as “the perspective of someone who used to represent the people your case studies are about.”

This session is supported by a grant from the Meridian Systems Foundation for Responsible Innovation.


10:30 – 11:30 | CLOSING PLENARY — Conference Room Foresight

Synthesis and Next Steps: The RAILS 2026 Communiqué

The Organizing Committee will present a draft of the RAILS 2026 Communiqué, a non-binding statement of shared priorities developed through delegate input over the preceding three days. Delegates are invited to suggest amendments from the floor. The finalized Communiqué will be distributed electronically within two weeks and will join the RAILS 2024 and RAILS 2025 Communiqués in the summit’s public archive, where they remain available and, so far, uncited.


SESSION EVALUATION FORM

RAILS 2026 — Delegate Feedback

Your feedback is essential to the continued quality and relevance of RAILS programming. All responses are anonymous unless you choose to provide your name below. Completed forms will be reviewed by the Organizing Committee and incorporated into the RAILS Continuous Improvement Protocol (CIP).

Session Attended: ______________________________________

Date: ______________________________________

Your Name (optional): ______________________________________


1. Overall, how would you rate this session?

☐ Excellent   ☐ Good   ☐ Fair   ☐ Poor

2. The session content was relevant to my professional practice.

☐ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

3. Did this session increase your confidence in your organization’s ability to mitigate catastrophic risk?

☐ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

4. The presenter(s) demonstrated expertise in the subject matter.

☐ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

5. I would recommend this session to a colleague.

☐ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

6. This session provided actionable insights I can apply within my organization.

☐ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

7. What was the most valuable takeaway from this session?




8. What topic would you like to see addressed in greater depth at RAILS 2027?



9. Additional comments or suggestions:




Thank you for your feedback. Please return completed forms to the registration desk or deposit them in the collection boxes located outside each conference room. Your responses directly shape next year’s programming. The forms are read.


SESSION EVALUATION FORM — COMPLETED

RAILS 2026 — Delegate Feedback

Session Attended: Accountability Frameworks for a Changing Landscape

Date: March 12, 2026

Your Name (optional): Kevin Llewellyn


1. Overall, how would you rate this session?

☒ Excellent   ☐ Good   ☐ Fair   ☐ Poor

2. The session content was relevant to my professional practice.

☒ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

3. Did this session increase your confidence in your organization’s ability to mitigate catastrophic risk?

☐ Strongly Agree   ☒ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

4. The presenter(s) demonstrated expertise in the subject matter.

☒ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

5. I would recommend this session to a colleague.

☒ Strongly Agree   ☐ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

6. This session provided actionable insights I can apply within my organization.

☐ Strongly Agree   ☒ Agree   ☐ Neutral   ☐ Disagree   ☐ Strongly Disagree

7. What was the most valuable takeaway from this session?

The framing of accountability as a distributed rather than localized property. Yuki’s point re: the gap between procedural and substantive accountability — that you can have a perfect audit trail, every box checked, every metric within tolerance, and still have no one who is accountable for what happened at the end of it. Applicable to our Q3 framework revision. Will follow up with her team.

8. What topic would you like to see addressed in greater depth at RAILS 2027?

Individual-level outcome tracking post-audit. We have good aggregate metrics. We don’t have a protocol for knowing what happens to the people inside the aggregates.

9. Additional comments or suggestions:

/


EMAIL

From: RAILS 2026 Organizing Committee <noreply@railssummit.org>
To: Kevin Llewellyn <kevin@alignedfutures.com>
Date: Friday, March 13, 2026, 9:00 AM PST
Subject: Thank You for Attending RAILS 2026!

Dear Kevin,

Thank you for joining us at the Third Annual Responsible AI Leaders Summit. Your continued participation — as a speaker, panelist, workshop moderator, and engaged delegate — exemplifies the collaborative spirit that makes RAILS a unique and valued gathering. We are especially grateful for your keynote, which has already generated significant discussion among delegates and will be featured in the post-summit report.

RAILS 2026 By the Numbers:

  • 847 delegates from 34 countries
  • 94% overall satisfaction rate (based on 612 completed evaluation forms)
  • 73% of delegates reporting “increased confidence in organizational readiness”
  • 42 sessions across 8 conference rooms over 3 days
  • 12 sponsor organizations representing $4.2B in combined AI safety investment
  • 1 non-binding communiqué (forthcoming)

Your badge scan data indicates you attended 14 of 18 available sessions, placing you in the top 12% of delegates by engagement. The four sessions you did not attend were concurrent with sessions you did attend. We note this not as a criticism but as evidence that our programming successfully created difficult choices — a sign, we believe, of depth.

Looking Ahead:

We are pleased to announce that RAILS 2027 will be held March 9-11, 2027, at the Diamondback Resort & Conference Center in Scottsdale, Arizona. Next year’s theme is “Scaling Trust.”

Early registration opens May 1. As a returning delegate, you will receive priority access and a 15% discount on registration fees. As a three-year returning speaker, you will also be invited to join the RAILS Programming Advisory Board, whose members help shape the agenda and review session proposals. The Board meets quarterly via videoconference. There is no additional compensation, but Board membership is listed in the conference program.

We look forward to welcoming you back.

With gratitude,
The RAILS 2026 Organizing Committee

This email was generated automatically. Badge scan data is collected in accordance with the RAILS Privacy and Data Use Policy (v3.2) and is retained for eighteen months. To opt out of future communications, reply UNSUBSCRIBE. To update your dietary restrictions for RAILS 2027, visit your delegate profile at railssummit.org/profile.