VernacuLab

About

VernacuLab is a research consultancy that helps organizations navigate the challenges of implementing AI in their own settings.

VernacuLab is led by Reva Schwartz. With a 20-year career in technology evaluation, Reva works at the intersection of technology and society.

LinkedIn | Google Scholar


Services

Consulting

AI Governance

Trustworthy AI

AI Risk Management

Operating and Monitoring AI in Deployment

Analysis of Online Narratives

Markup of AI Chatbot Dialogues

Feedback Loops in Technology Use

Securing Proprietary Content

Research

AI Risk Measurement

Bespoke T&E Suites

Scoring & Rubric Development and Implementation

Test & Evaluation


Experience and Expertise

Advisory

American Bar Association President’s Task Force on Artificial Intelligence and the Law

National Academy of Sciences

Evaluation Programs

Program Lead: NIST Assessing Risks and Impacts of AI (ARIA)

ARIA Program Resources 

ARIA Program Video

Guidance

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, NIST Trustworthy and Responsible AI, National Institute of Standards and Technology (2024)

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, Special Publication (NIST SP), National Institute of Standards and Technology (2022)

Recent Publications

Schwartz, R., Waters, G., Amironesei, R., Greenberg, C., Fiscus, J., Hall, P., Jones, A., Jain, S., Godil, A., Greene, K., Jensen, T., Schulman, N. (2024) The Assessing Risks and Impacts of AI (ARIA) Program Evaluation Design Document. National Institute of Standards and Technology.

Schwartz, R., Fiscus, J., Greene, K., Waters, G., Chowdhury, R., Jensen, T., Greenberg, C., Godil, A., Amironesei, R., Hall, P., Jain, S. (2024) The NIST Assessing Risks and Impacts of AI (ARIA) Pilot Evaluation Plan. National Institute of Standards and Technology.

Schwartz, R. (2024) Informing an Artificial Intelligence risk-aware culture with the NIST AI Risk Management Framework. In Artificial Intelligence: Legal Issues, Policy, and Practical Strategies, edited by Cynthia H. Cwik, Christopher A. Suarez, and Lucy L. Thomson. American Bar Association.

Slaughter, I., Greenberg, C., Schwartz, R., & Caliskan, A. (2023). Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition. Conference on Empirical Methods in Natural Language Processing.

Qin, H., Kong, J., Ding, W., Ahluwalia, R., El Morr, C., Engin, Z., Effoduh, J.O., Hwa, R., Guo, S.J., Seyyed-Kalantari, L., Muyingo, S.K., Moore, C.M., Parikh, R., Schwartz, R., Zhu, D., Wang, X., & Zhang, Y. (2023). Towards Trustworthy Artificial Intelligence for Equitable Global Health. ArXiv, abs/2309.05088.

Atherton, D., Schwartz, R., Fontana, P.C., Hall, P. (2023) The Language of Trustworthy AI: An In-Depth Glossary of Terms. National Institute of Standards and Technology, Gaithersburg, MD. NIST AI 100-3.


Recent Appearances

Schwartz, R. Co-presenter: Showcasing NIST’s Research in Trustworthy AI, GovAI Summit (October 2024)

Assessing AI's Risks and Impacts: A Conversation with NIST's Reva Schwartz, The Privacy Advisor Podcast (August 2024)

Keynote Speaker, IEEE ProComm (June 2024)

Sponsor Presentation, National Academy of Sciences Human and Organization Factors in AI Risk Management (May 2024)

Schwartz, R. Panel Session: Enabling US Leadership in Artificial Intelligence for Weather, National Academies Board on Atmospheric Sciences and Climate (May 2024)

Schwartz, R. Panelist: AI Ethics in Financial Services Summit (April 2024)

Schwartz, R. Speaker: National Symposium on Equitable AI in Practice: Impacts, Risks and Opportunities, Morgan State University (April 2024)

Schwartz, R. Panel Session: Lessons Learned: Operationalizing NIST’s AI RMF and Other Governance Frameworks, IAPP Global Privacy Summit 2024 (April 2024)

Black, E., Cooper, A., Heidari, H., Koepke, L., Raji, D., Schwartz, R. Governance & Accountability for ML: Existing Tools, Ongoing Efforts, & Future Directions, Tutorial, NeurIPS 2023 (December 2023)

Schwartz, R. Panel Session: Federal Policy and Governance Considerations for LLMs/Generative AI, LLMs/Generative AI in Health and Medicine: An Issue Framing Conversation, National Academies of Medicine Leadership Consortium: Digital Health Action Collaborative (October 2023)

AI Governance: A Conversation with Reva Schwartz of the National Institute of Standards and Technology (NIST) about NIST's new AI Risk Management Framework, American Bar Association (September 2023)

Featured Speaker, Carnegie Mellon Convening on Operationalizing the NIST Risk Management Framework (July 2023)

Invited Workshop Presenter: Sociotechnical Approaches to Measurement and Validation for Safety in AI (July 2023)

Fireside Chat: Using the AI RMF, 2023 Insurance Public Policy Summit (May 2023)

Fireside Chat: Using the AI RMF to manage the risks of Generative AI, Harvard Berkman Klein Center (May 2023)

Discussion of the NIST AI RMF, International Association of Privacy Professionals (March 2023)


Contact

Interested in working together? Send us a message and we will be in touch shortly. We can’t wait to hear from you!

info@vernaculab.org