@percyliang

Associate Professor of Computer Science, Stanford University; Director, Center for Research on Foundation Models (CRFM)

Domain Expertise:
Machine Learning, Natural Language Processing, AI Safety and Alignment
Detected Biases:
Advocacy for open-source AI development; focus on ethical AI challenges such as copyright and fair use
Average Truthfulness: 95%
Posts Analyzed: 1

Who Is This Person?

Percy Liang is a leading AI researcher and academic specializing in machine learning and natural language processing. He is currently an Associate Professor of Computer Science at Stanford University and serves as the Director of the Center for Research on Foundation Models (CRFM). His work focuses on developing robust, fair, and interpretable AI systems, as well as advancing open-source AI initiatives. Recent activities include leading the Marin project, an open lab for collaborative AI development, publishing papers on foundation models and AI attribution (e.g., detecting model derivations in October 2025), and contributing to discussions on open AI ecosystems. He has received prestigious awards such as the Presidential Early Career Award for Scientists and Engineers (2019) and maintains an active presence in AI research communities.

How Credible Are They?

Baseline Score: 95%

Percy Liang is a highly credible, verified academic expert in AI, with consistent professional affiliations across platforms and no evidence of controversies or inaccuracies. His Twitter activity complements his scholarly work, promoting informed discourse on foundation models and AI ethics without sensationalism. His influence is significant within tech and research circles, bolstered by awards and collaborations, making him a reliable source on AI-related topics.

Assessment by Grok AI

What's Their Track Record?

Percy Liang has a strong track record of academic rigor, with no documented fact-check failures, corrections, or controversies related to misinformation. His publications are peer-reviewed and highly cited, supported by grants from reputable sources such as the NSF and Open Philanthropy. His tweet patterns show thoughtful, evidence-based commentary on AI topics, often linking to papers or projects, with no history of retracted claims or disputes. As an established researcher, his output emphasizes transparency and reproducibility in AI development.

What Have We Analyzed?

Recent posts and claims we've fact-checked from this author