VernacuLab

VernacuLab is a research consultancy dedicated to transforming AI technology into products that provide utility and value for industry and the public.

About

VernacuLab is a research consultancy dedicated to helping organizations optimize the value of their advanced technology and increase collective understanding of how AI is transforming our culture and society.

At VernacuLab, we think the most pressing questions about AI are human. Our services focus on these human dimensions, including:

Consulting

AI Governance

Trustworthy AI

Risk Management

Operating and Monitoring AI in Deployment

Analysis of Online Narratives

Markup of Human Dialogues with AI Chatbots

Feedback Loops in Technology Use

Securing Proprietary Content

Research

Testing and Evaluation

AI Risk Management

Proving AI Use Cases

Experience and Expertise

Advisory

American Bar Association President’s Task Force on Artificial Intelligence and the Law

National Academy of Sciences

Evaluation Programs

Program Lead: NIST Assessing Risks and Impacts of AI (ARIA)

ARIA Program Webpage

The Assessing Risks and Impacts of AI (ARIA) Program Evaluation Design Document (2024)

The NIST Assessing Risks and Impacts of AI (ARIA) Pilot Evaluation Plan (2024)

ARIA Program Video

Documents

Guidance

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, NIST Trustworthy and Responsible AI (NIST AI 600-1), National Institute of Standards and Technology (2024)

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, NIST Special Publication (SP) 1270, National Institute of Standards and Technology (2022)

Video Presentations

Podcast Appearances

Resources

Recent Presentations

Keynote Speaker, IEEE ProComm (June 2024)

Featured Speaker, Carnegie Mellon Convening on Operationalizing the NIST Risk Management Framework (July 2023)

Invited Workshop Presenter: Sociotechnical Approaches to Measurement and Validation for Safety in AI (July 2023)

Fireside Chat: Using the AI RMF, 2023 Insurance Public Policy Summit (May 2023)

Fireside Chat: Using the AI RMF to Manage the Risks of Generative AI, Harvard Berkman Klein Center (May 2023)

Recent Publications

Schwartz, R. (2024). Informing an Artificial Intelligence Risk-Aware Culture with the NIST AI Risk Management Framework. In C.H. Cwik, C.A. Suarez, & L.L. Thomson (Eds.), Artificial Intelligence: Legal Issues, Policy, and Practical Strategies. American Bar Association.

Slaughter, I., Greenberg, C., Schwartz, R., & Caliskan, A. (2023). Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition. Conference on Empirical Methods in Natural Language Processing.

Qin, H., Kong, J., Ding, W., Ahluwalia, R., El Morr, C., Engin, Z., Effoduh, J.O., Hwa, R., Guo, S.J., Seyyed-Kalantari, L., Muyingo, S.K., Moore, C.M., Parikh, R., Schwartz, R., Zhu, D., Wang, X., & Zhang, Y. (2023). Towards Trustworthy Artificial Intelligence for Equitable Global Health. arXiv:2309.05088.

Atherton, D., Schwartz, R., Fontana, P.C., & Hall, P. (2023). The Language of Trustworthy AI: An In-Depth Glossary of Terms. National Institute of Standards and Technology, Gaithersburg, MD. NIST AI 100-3.

Gleaves, L.P., Schwartz, R., & Broniatowski, D.A. (2020). The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems. arXiv:2009.06675.

Contact

Interested in working together? Send us a few details and we’ll be in touch shortly. We can’t wait to hear from you!
