Governance scores
TECH100 ranking.
TECH100 AI Ethics & Gov Index – Constituents
Showing 25 of 108 companies (preview).
| Rank | Weight | Company | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | Composite AI Governance & Ethics Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 4% | Microsoft (MSFT) | 100.0 | 96.0 | 100.0 | 100.0 | 96.0 | 98.0 |

Microsoft (MSFT)
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 95.0 | 95.0 | 95.0 | 90.0 | 85.0 | 92.0 |
| 2025-04-01 | 95.0 | 95.0 | 95.0 | 95.0 | 88.0 | 94.0 |
| 2025-07-01 | 97.0 | 95.0 | 95.0 | 95.0 | 92.0 | 95.0 |
| 2025-10-01 | 98.0 | 95.0 | 95.0 | 96.0 | 92.0 | 95.0 |
| 2026-01-01 | 100.0 | 96.0 | 100.0 | 100.0 | 96.0 | 98.0 |
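The history tables throughout this section are consistent with the AIGES composite being an equal-weighted mean of the five pillar scores, with the table column rounded to the nearest whole point (the company notes quote the unrounded mean, e.g., Cisco's 93.8 below). A minimal Python sketch of that relationship; equal pillar weighting is an inference from the published numbers, not a stated methodology:

```python
def aiges_composite(pillars: dict[str, float]) -> float:
    """Equal-weighted mean of the five pillar scores (assumed weighting)."""
    return sum(pillars.values()) / len(pillars)

# Microsoft's 2026-01-01 row from the history table above.
msft = {
    "Transparency": 100.0,
    "Ethical Principles": 96.0,
    "Governance Structure": 100.0,
    "Regulatory Alignment": 100.0,
    "Stakeholder Engagement": 96.0,
}
mean = aiges_composite(msft)
print(mean)                  # 98.4 (unrounded composite)
print(f"{round(mean):.1f}")  # 98.0, matching the displayed table value
```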
Cisco (CSCO)
Held steady at #2 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Cisco remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 93.8 versus 90.6 previously (+3.2). Score movement was primarily driven by Regulatory Alignment (+5.0), Transparency (+4.0), and Stakeholder Engagement (+3.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: Cisco took notable steps in responsible AI education and risk management. It launched new AI training programs ("Cisco AI Business Practitioner" and "AI Technical Practitioner") focusing on responsible and ethical AI use. Cisco also released its 2025 AI Readiness Index (Oct 2025), highlighting governance and security as key differentiators of AI-leading organizations. Furthermore, Cisco introduced an Integrated AI Security & Safety Framework - a holistic framework for managing AI risks (adversarial threats, content safety failures, etc.) in enterprise environments. No ethics incidents were publicly reported; Cisco's updates indicate proactive governance rather than reaction to public controversy.

Cisco is aligning with emerging AI regulations and standards. In October 2025, Cisco's Chief Legal Officer and AI teams likely monitored EU AI Act developments. Cisco's AI Readiness Index noted that Pacesetter firms integrate AI into security and identity systems and are well equipped to "control and secure AI agents" - reflecting alignment with regulatory expectations around AI risk controls. Cisco is also a signatory of the White House voluntary AI commitments (it was among the companies that committed to AI safety measures in 2023-24). Moreover, Cisco's new framework explicitly covers organizational governance and compliance as part of AI risk, indicating it is preparing for broad AI risk-management norms (like NIST and ISO AI standards). Cisco has not been named in any publicly reported allegations of regulatory non-compliance regarding AI. Instead, it often advises governments (e.g., participating in AI policy forums). For instance, Cisco executives have joined discussions on AI governance frameworks for critical infrastructure (though no specific public filing was found post-Oct 2025). Overall, Cisco appears proactive, addressing AI risks (security, bias) now to meet or exceed future regulations.

No specific AI ethics incidents involving Cisco were publicly noted in late 2025. One potential gap is the limited public disclosure of Cisco's internal AI oversight mechanisms (e.g., whether it has a formal chief AI ethics officer or a published AI ethics report). While Cisco's external initiatives are relatively extensive based on public disclosures, there is scant information on its internal audits or bias-testing results. Additionally, as a major networking firm enabling AI across industries, Cisco could face future scrutiny over how its AI-powered solutions (like security AI) handle privacy or bias, but no public controversies have arisen yet. On the positive side, the absence of publicly reported incidents in Q4 2025 suggests Cisco's precautionary approach (embedding governance early) may be effective.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 90.0 | 90.0 | 92.0 | 88.0 | 90.0 | 90.0 |
| 2025-04-01 | 90.0 | 90.0 | 92.0 | 88.0 | 90.0 | 90.0 |
| 2025-07-01 | 90.0 | 90.0 | 92.0 | 88.0 | 90.0 | 90.0 |
| 2025-10-01 | 91.0 | 91.0 | 92.0 | 88.0 | 91.0 | 91.0 |
| 2026-01-01 | 95.0 | 93.0 | 94.0 | 93.0 | 94.0 | 94.0 |
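The "primarily driven by" attribution in each company note is consistent with ranking the quarter-over-quarter pillar changes by absolute size. A hedged sketch of that attribution, using Cisco's 2025-10-01 and 2026-01-01 rows from the table above; the function and variable names are illustrative:

```python
PILLARS = ["Transparency", "Ethical Principles", "Governance Structure",
           "Regulatory Alignment", "Stakeholder Engagement"]

def top_drivers(prev: dict, curr: dict, n: int = 3) -> list[tuple[str, float]]:
    """Largest pillar moves between two rebalances, by absolute change."""
    deltas = {p: curr[p] - prev[p] for p in PILLARS}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

csco_oct = dict(zip(PILLARS, [91.0, 91.0, 92.0, 88.0, 91.0]))
csco_jan = dict(zip(PILLARS, [95.0, 93.0, 94.0, 93.0, 94.0]))
print(top_drivers(csco_oct, csco_jan))
# [('Regulatory Alignment', 5.0), ('Transparency', 4.0), ('Stakeholder Engagement', 3.0)]
# matching the drivers cited in the Cisco note above
```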
Alphabet (Google) (GOOGL)
Held steady at #3 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Alphabet remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 89.4 versus 89.0 previously (+0.4). Score movement was primarily driven by Transparency (+6.0), Regulatory Alignment (-6.0), and Stakeholder Engagement (+4.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: after October 2025, Google launched no new public AI ethics policies, but it implemented previously announced frameworks and responded to regulatory pressure. Earlier in 2025, Google updated its AI principles by removing certain prohibitions - notably allowing some AI work in military domains under oversight - a significant shift from its 2018 principles. By late 2025, Google focused on compliance with the EU AI Act: it confirmed participation in the EU's voluntary AI Code of Practice to align with upcoming obligations, even as it voiced concerns about over-regulation slowing innovation. Google DeepMind (an Alphabet unit) announced partnerships with the UK government and AI Safety Institute in Nov-Dec 2025 to advance safe AI. Additionally, in Oct 2025 Google introduced new safety measures for teen users of its AI-enabled platforms (Instagram/Facebook are Meta's; Google's analogous efforts involve YouTube/Search AI adjustments), reflecting attention to AI's societal impact on young users. No major ethics incidents were publicly reported for Google in Q4 2025, but the company remains under close external scrutiny (e.g., ongoing antitrust and content-moderation debates).

Google is highly engaged with AI regulators worldwide. By July 2025, Google agreed to sign the EU's Code of Practice on AI - a voluntary pact to comply with the forthcoming EU AI Act for general-purpose models. (In contrast, Meta refused, which drew criticism.) Google's President of Global Affairs acknowledged improvements in the EU's code but continued to lobby on contentious points (copyright, trade secrets). As of Q4 2025, Google is preparing to implement the AI Act's requirements by 2026-2027, including building features like content provenance (watermarking AI outputs), maintaining documentation for model transparency, and setting up compliance teams. Google also announced support for the U.S. AI Safety Institute and joined the Frontier Model Forum, aligning with industry self-regulation efforts. Furthermore, Google DeepMind has partnerships with the UK government (deepened ties announced Dec 2025) to advise on AI governance and research safety. In terms of hard law, Google has faced antitrust and privacy fines in other domains but no penalties yet specifically under AI legislation (since the new laws are not yet in effect). However, it is under close watch: for example, after earlier EU privacy complaints about AI data use, Google implemented an opt-out for web publishers from AI model training. In late 2025, reports surfaced that Google eased some internal AI restrictions to stay competitive - in effect, lobbying through actions. Overall, Alphabet is striving to meet regulatory expectations (signing voluntary codes, contributing to policy conversations) while cautioning against overly restrictive rules that could "chill" innovation. This dual stance is part of its alignment strategy.

Incident: in early 2025, Google faced criticism for relaxing its AI ethics guidelines on military use, which some ethicists viewed as a retreat from its earlier principled stance. This did not produce an external incident per se, but stakeholders noted it as a potential ethical risk (entrusting Google to self-police sensitive AI applications). Additionally, Google's large language model Bard saw minor public criticism (e.g., for providing incorrect information), but no severe harm was reported. Gap: Google chose not to reinstate a fully external AI ethics advisory board after its 2019 attempt ended in disbandment - oversight is entirely internal now, which some advocacy groups see as a transparency gap. Another gap may be privacy of AI training data: a November 2025 report suggested Google had begun using more user data (emails) to train its models, raising concerns about consent. Google's stance is that it anonymizes and secures data, but the balance between data use and privacy remains a point of contention. Finally, Google's refusal (in July 2025) to place a moratorium on advanced AI development - unlike some calls for "AI pauses" - could be seen as a governance gap by those fearing unbridled AI progress; Google instead argues for responsible progress without halting. In summary, Google's AI governance is well documented in public materials, but the company's sheer scale means any small gap is magnified; it thus faces ongoing pressure to demonstrate that its principles and processes are effective in practice.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 90.0 | 90.0 | 85.0 | 88.0 | 90.0 | 89.0 |
| 2025-04-01 | 90.0 | 85.0 | 90.0 | 90.0 | 90.0 | 89.0 |
| 2025-07-01 | 90.0 | 85.0 | 90.0 | 90.0 | 90.0 | 89.0 |
| 2025-10-01 | 90.0 | 85.0 | 90.0 | 90.0 | 90.0 | 89.0 |
| 2026-01-01 | 96.0 | 82.0 | 91.0 | 84.0 | 94.0 | 89.0 |
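Each note pairs the composite with a rank and a flat 4% weight for top-25 constituents (25 × 4% accounts for the full top-bucket weight), and notes that rank moves also reflect the updated peer set. A minimal sketch of how ranks and weights could be reassigned at a rebalance; equal weighting within the top 25 is an assumption consistent with the notes, not a published methodology, and the score data here is illustrative:

```python
def rebalance(composites: dict[str, float], top_n: int = 25,
              top_weight: float = 0.04) -> dict[str, tuple[int, float]]:
    """Rank constituents by composite (descending); flat weight for the top N."""
    ranked = sorted(composites, key=composites.get, reverse=True)
    return {c: (i + 1, top_weight if i < top_n else 0.0)
            for i, c in enumerate(ranked)}

scores = {"MSFT": 98.0, "CSCO": 93.8, "GOOGL": 89.4, "NVDA": 88.4, "VRSK": 88.0}
for name, (rank, wt) in rebalance(scores).items():
    print(f"{rank:>2}  {name:6} {wt:.2%}")
```

Under this scheme, adding or removing constituents can shift a company's rank even when its own scores are unchanged, which is the "updated peer set" effect the notes describe.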
NVIDIA (NVDA)
Moved up 1 position to #4 (from #5) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, NVIDIA remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 88.4 versus 85.0 previously (+3.4). Score movement was primarily driven by Transparency (+6.0), Governance Structure (+6.0), and Stakeholder Engagement (+5.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: NVIDIA has articulated a comprehensive "Trustworthy AI" framework and continued to implement responsible AI measures through Q4 2025. While no single new policy was launched after Oct 1, 2025, NVIDIA's existing initiatives gained momentum: it reinforced commitments to AI transparency, safety, bias mitigation, and alignment with global AI safety efforts. CEO Jensen Huang engaged with government officials in Dec 2025 to discuss AI governance (e.g., U.S. chip export rules and AI regulations), signaling NVIDIA's involvement in policy dialogue. The company also expanded responsible AI tooling in its developer ecosystem, such as the Model Card++ generator for documenting models and NeMo Guardrails for LLM safety, both promoted at its GTC 2025 conference. No ethics incidents were publicly reported for NVIDIA itself; however, as a supplier, NVIDIA faces indirect scrutiny (e.g., concerns about how its GPUs might be used for autonomous weapons or mass surveillance by others). NVIDIA addressed such issues by aligning with voluntary commitments (like the White House AI safety pledge) and emphasizing AI governance across the supply chain.

NVIDIA actively aligns with evolving AI regulations and policies, often contributing to their development. As a chipmaker, NVIDIA faced export regulations in 2025 (the U.S. government restricting advanced GPU sales to certain countries for security reasons). Jensen Huang met U.S. officials in Dec 2025 to discuss these rules and AI regulation, showing NVIDIA's engagement in policy-shaping. On AI ethics regulation, NVIDIA's principles line up with the U.S. AI Bill of Rights (non-binding) and the EU AI Act's ethos (e.g., emphasis on transparency and risk mitigation). NVIDIA likely prepared to comply with the EU AI Act, especially for high-risk AI systems like autonomous vehicles - it provides documentation for its Drive platform to car OEMs for their regulatory filings. NVIDIA also joined the Partnership on AI and global AI governance efforts, aligning with multi-stakeholder guidelines. Importantly, NVIDIA publicly supports the White House AI safety commitments (July 2023); by end of 2025 it had likely implemented them, e.g., security testing of models, watermarking (SynthID usage), and sharing best practices. There have been no known regulatory penalties against NVIDIA related to AI. Instead, regulators often collaborate with NVIDIA on safe AI (for instance, the UK's 2023 AI Safety Summit included industry labs, with NVIDIA among the advisers). Thus in Q4 2025 NVIDIA stands as a partner to regulators: it helps craft standards (co-developing IEEE/ISO AI standards, etc.) and ensures its products enable compliance (like releasing responsible AI toolkits to enterprise customers). One particular alignment: NVIDIA emphasizes "AI supply chain governance," mirroring the EU Act's language about providers' and deployers' obligations. It provides mechanisms (like its AI workflow management and guardrails) so that companies using NVIDIA technology can more easily fulfill regulatory requirements for auditability and safety.

No direct ethics allegations or reported non-compliance are attributed to NVIDIA's own operations in late 2025. However, one ongoing concern (gap) is downstream usage of NVIDIA's products: its powerful chips enable AI built by others who might not be responsible. For example, reports emerged of NVIDIA GPUs being used in surveillance systems in authoritarian regimes - a supply-chain ethics gap outside NVIDIA's immediate control. NVIDIA's approach is to work with governments on export controls (which it did, while noting that overly broad bans could hurt innovation). Another gap is that, unlike cloud service providers, NVIDIA does not issue transparent AI impact reports because it is mainly a supplier; this limits public insight into how it ensures ethical use of its technology by clients. On incidents: earlier in 2025, an NVIDIA research paper on AI agents raised hypothetical extreme risks (like multi-agent collusion); though not an incident, it highlights that NVIDIA is aware of potential future "AGI" issues but has not detailed governance for them publicly. Lastly, as AI energy use and carbon footprint rise (NVIDIA's GPUs consume significant power), some critics point to an ethical gap in sustainability; NVIDIA ramped up cleantech investments in 2025 to offset its footprint, but this is an emerging area of accountability. In conclusion, NVIDIA's own house has been kept in order ethically so far, and the remaining gaps mostly concern indirect effects and ensuring its rapid AI growth stays coupled with governance that is extensive, at least based on public disclosures - something it appears to be addressing via broad principle-based programs.
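The NeMo Guardrails toolkit mentioned above is open source; the following is a minimal sketch of its documented quickstart pattern, not NVIDIA's recommended deployment. The config directory path, its contents, and the prompt are illustrative assumptions:

```python
# pip install nemoguardrails
from nemoguardrails import LLMRails, RailsConfig

# Load a rails configuration from a local directory (assumed to contain a
# config.yml naming the LLM engine/model plus Colang rail definitions).
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Generate a response with the configured input/output rails applied.
response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our model card for auditors."}
])
print(response["content"])
```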
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 85.0 | 80.0 | 88.0 | 85.0 | 85.0 | 85.0 |
| 2025-04-01 | 85.0 | 80.0 | 88.0 | 85.0 | 85.0 | 85.0 |
| 2025-07-01 | 85.0 | 80.0 | 88.0 | 85.0 | 85.0 | 85.0 |
| 2025-10-01 | 86.0 | 81.0 | 88.0 | 85.0 | 85.0 | 85.0 |
| 2026-01-01 | 92.0 | 80.0 | 94.0 | 86.0 | 90.0 | 88.0 |
Verisk Analytics (VRSK)
Moved down 1 position to #5 (from #4) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Verisk remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 88.0 versus 86.4 previously (+1.6). Score movement was primarily driven by Regulatory Alignment (+5.0), Transparency (+2.0), and Ethical Principles (-1.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: no material public disclosures on AI ethics or governance by Verisk were identified since 01 Oct 2025. Verisk has an established set of ethical AI principles (branded "FAITH" - Fairness, Accountability, Inclusion, Transparency, and Humanity/Privacy) that predates Q4, and the company embeds these in its AI-driven products (e.g., insurance underwriting AI). Between October and December 2025, however, Verisk announced no new AI ethics policies, governance changes, or notable incidents. It is continuing ongoing initiatives such as developing industry AI codes of conduct and responsible AI product features, as detailed below, but without new public updates this quarter.

As an analytics provider in highly regulated sectors (insurance, finance), Verisk aligns its AI development with applicable regulations. By Q4 2025, the key relevant rules include anti-discrimination laws in insurance (e.g., the U.S. FTC Act, EU regulations) and emerging AI regulations. Verisk's tools undergo regulatory approval or validation in some cases: notably, Verisk has topped the U.S. FDA's list of authorized AI-based medical devices for four years running (as of mid-2025, with 100 AI clearances), which indicates extensive compliance, based on public disclosures, for its healthcare AI solutions. In insurance, Verisk actively engaged with UK regulators via the AI code (aligned with UK AI guidelines for financial services). Also, with the EU AI Act imminent, Verisk's European operations likely began readiness assessments (though these were not publicized). Verisk has faced no known regulatory sanctions for its AI. Instead, it appears to be proactively self-regulating through industry standards (the insurance AI Code of Conduct is an example: effectively a pre-emptive self-regulatory alignment with ethical norms). In summary, even though Verisk announced nothing new after Oct 1, 2025, it can be inferred that the company quietly ensures its AI solutions meet legal standards for fairness and transparency in credit/insurance contexts, and it contributes to shaping those standards.

No incidents of AI misuse or scandal were publicly recorded for Verisk in this period. Gaps: Verisk's low public profile on AI governance could itself be a gap - compared to larger tech firms, Verisk provides fewer public details on its AI oversight. This might signal a transparency gap, though it may simply reflect its B2B focus. Another gap is that as AI regulations like the EU AI Act come into force, Verisk will need even more rigorous documentation and potentially third-party audits of its models; no public information indicates how prepared Verisk is (though internal readiness is likely). Lastly, while the insurance AI code is a positive step, it is voluntary - a gap remains until regulators codify those principles. If Verisk's clients or data are global, varying standards might pose compliance challenges. In the absence of specific issues, the main publicly reported item to note is positive: Verisk's AI was cited for enabling efficient underwriting while being carefully kept within ethical bounds. The company appears to avoid cutting ethical corners, perhaps precisely to prevent incidents in the first place.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 85.0 | 90.0 | 90.0 | 85.0 | 80.0 | 86.0 |
| 2025-04-01 | 85.0 | 90.0 | 90.0 | 85.0 | 80.0 | 86.0 |
| 2025-07-01 | 85.0 | 90.0 | 90.0 | 85.0 | 80.0 | 86.0 |
| 2025-10-01 | 86.0 | 91.0 | 90.0 | 85.0 | 80.0 | 86.0 |
| 2026-01-01 | 88.0 | 90.0 | 91.0 | 90.0 | 81.0 | 88.0 |
Cognizant (CTSH)
Held steady at #6 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Cognizant remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 85.4 versus 84.2 previously (+1.2). Score movement was primarily driven by Stakeholder Engagement (+5.0), Ethical Principles (-3.0), and Regulatory Alignment (+3.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: Cognizant has been steadily executing its Responsible AI strategy, though it did not publicly announce new governance policies in Q4 2025. It remains focused on helping clients deploy AI ethically and on preparing for regulations like the EU AI Act. Earlier in 2025, Cognizant highlighted its Responsible AI framework and commitment to trustworthy AI in commentary around the EU AI Act. Post-Oct 1, 2025, Cognizant's activities included hosting discussions on AI ethics (e.g., at Davos 2025 and its own Cognizant Discovery event) and, presumably, continuing internal AI ethics training for its consultants. No specific incidents or policy changes were publicly reported in late 2025. In summary, Cognizant is maintaining its role as an advisor on AI governance (to clients and governments) rather than making splashy new public pledges.

Cognizant is strongly focused on regulatory alignment, given that its clients span regulated industries. Notably, it has been preparing for the EU AI Act: in July 2024, Cognizant publicly wrote about how it is addressing the Act's requirements (transparency, human oversight, etc.). By late 2025, with the Act nearing final approval, Cognizant presumably audited its AI solutions portfolio to classify offerings by risk and to ensure compliance measures (documentation, bias testing, etc.) are in place. Cognizant also adheres to EEOC and OFCCP guidelines in the US for AI in hiring. The bias lawsuit (described below) underlines that alignment: Cognizant and HiredScore claim their AI "meets OFCCP/EEO/EU regulations" for equitable hiring. Additionally, Cognizant joins industry responses to policy: it engages with the US Chamber of Commerce AI Working Group and similar bodies to shape practical guidelines. On standards, Cognizant references the EU Ethics Guidelines for Trustworthy AI as a benchmark and likely follows ISO guidelines on AI risk management. No regulatory censures of Cognizant's AI work have surfaced; rather, regulators sometimes consult firms like Cognizant on implementation. One example: Cognizant's experts were involved in drafting AI auditing frameworks (through ISO or professional groups) in 2025, aligning with future enforcement schemes. In short, Cognizant in Q4 2025 is fully in tune with the regulatory landscape, building compliance into its services (as a selling point) and actively advising on policy.

Incident: in October 2025, a U.S. federal court certified a collective lawsuit alleging bias in an AI-powered hiring tool, naming Workday (a Cognizant partner) as a defendant. While Cognizant is not directly sued, the case shines a light on AI bias in HR systems - relevant because Cognizant helps implement similar systems - and underscores the importance of Cognizant's bias mitigation efforts. A gap could be perceived if any of Cognizant's delivered solutions had similar issues; thus far, Cognizant asserts its solutions are compliant. Gap: there is a potential communication gap - Cognizant's AI ethics leadership is less public-facing than peers' (e.g., no standalone Responsible AI report). This may obscure its good work from public view. Another gap/challenge is operationalizing principles at scale: Cognizant has many delivery teams worldwide, so ensuring every project team follows the AI ethics checklists rigorously is an ongoing effort. Lastly, Cognizant relies on third-party AI tools (from big tech) in many projects; a gap can arise if those tools have hidden biases or privacy issues beyond Cognizant's immediate control. The company mitigates this by vetting partners and "focusing on identifiable risks." No clients publicly reported AI ethics issues with Cognizant's work in Q4, which suggests its governance is reasonably effective. Overall, the bias lawsuit in the HR sector stands out as a warning sign - one Cognizant acknowledges and is addressing through enhanced bias testing and human oversight in such systems.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 85.0 | 90.0 | 90.0 | 80.0 | 75.0 | 84.0 |
| 2025-04-01 | 85.0 | 90.0 | 90.0 | 80.0 | 75.0 | 84.0 |
| 2025-07-01 | 85.0 | 90.0 | 90.0 | 80.0 | 75.0 | 84.0 |
| 2025-10-01 | 85.0 | 90.0 | 90.0 | 80.0 | 76.0 | 84.0 |
| 2026-01-01 | 87.0 | 87.0 | 89.0 | 83.0 | 81.0 | 85.0 |
Adobe (ADBE)
Held steady at #7 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Adobe remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 84.2 versus 82.6 previously (+1.6). Score movement was primarily driven by Stakeholder Engagement (+6.0), Ethical Principles (-5.0), and Transparency (+4.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: Adobe continued to position itself as a leader in ethical generative AI, though it introduced no major new policies after Oct 1, 2025. It maintained its well-documented AI ethics principles (accountability, responsibility, transparency) throughout the rollout of its Firefly generative AI. In late 2025, Adobe focused on expanding Content Credentials - ensuring that AI-generated images and media are labeled with provenance metadata. This aligns with industry standards and likely saw further integration in Q4 (e.g., more publishers adopting Adobe's Content Authenticity Initiative standards). Adobe also enforced its Generative AI usage guidelines (published mid-2025), which prohibit misuse (no illegal or harmful content generation). No notable incidents (such as misuse of Firefly or data privacy complaints) were publicly reported in Q4. In summary, Adobe's AI governance remained proactive and stable, continuing earlier initiatives with broad stakeholder support.

Adobe is aligned with regulatory efforts, often ahead of them. For instance, many principles of the EU AI Act (transparency obligations, bias avoidance, user disclosure) are already met by Adobe practices like content credentials and thorough testing. Adobe was also part of the U.S. voluntary AI commitments (July 2023); by late 2025 it had delivered on those promises by implementing watermarking and external red-teaming of its models. Additionally, Adobe actively participates in policy discussions: it provided input to the U.S. National Institute of Standards and Technology (NIST) on AI risk management and to EU bodies on AI in media. On copyright, which is quasi-regulatory, Adobe's approach to training data (using only licensed or public data for Firefly) set a de facto standard that aligns with anticipated legal norms around data use. In October 2025, the UK hosted a global AI Safety Summit - while Adobe was not a core "frontier model" company like OpenAI, it voiced support for the summit's goals and was likely represented via industry groups. Importantly, Adobe's C2PA work positions it to comply with any future laws requiring AI-generated content disclosure: if the EU or US mandates content provenance, Adobe's system already provides it. Adobe has faced minimal direct regulatory scrutiny; in 2024 a class-action lawsuit was filed against generative AI image companies (not Adobe) over copyright, and Adobe avoided such entanglement by aligning with IP law from the start. Thus, by Q4 2025, Adobe is seen as a model of self-regulation in harmony with emerging law.

Adobe has largely avoided negative incidents in AI. One could count as an "incident" the creative community's concerns about AI art: Adobe faced initial backlash in early 2023 (artists worried Firefly would replace jobs or was trained on their art), but by late 2025 this had calmed, partly due to Adobe's ethical approach. Adobe's pledge not to use customer content without consent and to compensate contributors for stock images used in training helped ease tensions - turning a potential incident into an example of ethical practice. Gaps: there are minor gaps to watch. Adobe's AI bias mitigation efforts have not been very public - for instance, does Firefly perform equally well across different demographic image prompts? Adobe likely tests for this but has not published a fairness audit. Another area for improvement is user education: while Adobe provides tutorials, some users still may not understand Content Credentials or may strip them off (although tampering is detectable); ensuring universal awareness of these features remains a challenge. Also, Adobe's enterprise clients may integrate its AI into workflows Adobe cannot fully oversee - ensuring those clients uphold Adobe's ethics (e.g., using Content Credentials rather than disabling them) is an ongoing effort. These gaps are relatively small, however: there were no regulatory or safety lapses for Adobe's AI in 2025, and stakeholder feedback has been generally positive. Adobe's AI governance thus appears comprehensive, with remaining gaps actively managed or anticipated.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 88.0 | 90.0 | 85.0 | 80.0 | 70.0 | 83.0 |
| 2025-04-01 | 88.0 | 90.0 | 85.0 | 80.0 | 70.0 | 83.0 |
| 2025-07-01 | 88.0 | 90.0 | 85.0 | 80.0 | 70.0 | 83.0 |
| 2025-10-01 | 88.0 | 90.0 | 85.0 | 80.0 | 70.0 | 83.0 |
| 2026-01-01 | 92.0 | 85.0 | 85.0 | 83.0 | 76.0 | 84.0 |
GE HealthCare (GEHC)
Held steady at #8 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, GE HealthCare remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 84.0 versus 81.4 previously (+2.6). Score movement was primarily driven by Stakeholder Engagement (+5.0), Regulatory Alignment (+4.0), and Transparency (+2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 AI governance and ethics developments were limited in public disclosures.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 80.0 | 85.0 | 80.0 | 90.0 | 70.0 | 81.0 |
| 2025-04-01 | 80.0 | 85.0 | 80.0 | 90.0 | 70.0 | 81.0 |
| 2025-07-01 | 80.0 | 85.0 | 80.0 | 90.0 | 70.0 | 81.0 |
| 2025-10-01 | 81.0 | 85.0 | 80.0 | 91.0 | 70.0 | 81.0 |
| 2026-01-01 | 83.0 | 87.0 | 80.0 | 95.0 | 75.0 | 84.0 |
Intel (INTC)
Held steady at #9 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Intel remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 83.8 versus 80.4 previously (+3.4). Score movement was primarily driven by Transparency (+6.0), Stakeholder Engagement (+5.0), and Regulatory Alignment (+4.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: post-October 2025, Intel's AI governance focus has been on ethical AI enablement and education. In Oct 2025, Intel pledged support for the White House's AI Education initiative, launching an "AI-Ready School" program in 250 U.S. schools to foster responsible AI skills. Intel also held its annual AI Global Impact Festival (Oct 2025), emphasizing responsible AI innovation among youth; notably, student AI projects were subject to ethics audits by Intel. Internally, Intel continues to embed AI ethics in its culture (e.g., linking AI development to its human rights principles) and to align with government AI policies. No major new AI ethics policies were announced in late 2025, but Intel sustained its existing responsible AI practices through community engagement and adherence to prior commitments (such as avoiding uses of its products that violate human rights).

No major AI ethics scandals were publicly reported for Intel since Oct 1, 2025; Intel's AI efforts have largely been positive (education, youth programs). One possible gap is that Intel has not publicly updated its AI ethics principles in recent years - its prior Responsible AI principles (established around 2019) are assumed to still apply, but no new principle framework was published in late 2025. Additionally, while Intel champions responsible AI externally, it is primarily a component supplier; ensuring the ethical use of its chips by customers (e.g., preventing misuse in surveillance or autonomous weapons) remains an ongoing challenge. Intel addresses this via its human rights policy (no adverse use of products), but enforcement relies on customers' cooperation. Overall, no governance gaps were evident in Q4 2025, aside from the continuing need for transparency on how Intel audits the end use of its AI technologies (an area not deeply detailed publicly).
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 80.0 | 82.0 | 80.0 | 85.0 | 75.0 | 80.0 |
| 2025-04-01 | 80.0 | 82.0 | 80.0 | 85.0 | 75.0 | 80.0 |
| 2025-07-01 | 80.0 | 82.0 | 80.0 | 85.0 | 75.0 | 80.0 |
| 2025-10-01 | 80.0 | 82.0 | 80.0 | 85.0 | 75.0 | 80.0 |
| 2026-01-01 | 86.0 | 83.0 | 81.0 | 89.0 | 80.0 | 84.0 |
NXP Semiconductors (NXPI)
Moved up 1 position to #10 (from #11) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, NXP remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 83.4 versus 80.0 previously (+3.4). Score movement was primarily driven by Governance Structure (+6.0), Regulatory Alignment (+6.0), and Stakeholder Engagement (+6.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: from Oct 2025 to present, NXP has maintained its comparatively extensive stance on AI ethics (based on public disclosures), although the foundational elements were established earlier. NXP's key development was sustaining a mature AI ethics framework that continues to guide its products. By late 2025, NXP highlights its five-principle AI ethics framework (non-maleficence, human autonomy, explainability, vigilance, privacy-by-design) in public materials and integrates these principles into product design and partnerships. In Oct 2025, NXP's focus was on promoting "responsible AI at the edge" through documentation and compliance. Notably, NXP's Q3 2025 earnings report and marketing emphasized responsible AI as a differentiator, and the company has been aligning with upcoming regulations (like the EU AI Act) given its presence in Europe. While NXP did not publicly announce new ethics governance bodies post-Oct, it had already launched an internal AI ethics initiative and council (back in 2020), which persist. No incidents of AI misuse have surfaced; instead, NXP in late 2025 presented itself as being at the forefront of AI ethics among chipmakers.

No notable incidents of unethical AI use were linked to NXP in this timeframe. NXP's focus on edge AI (in vehicles, appliances, etc.) produced no publicized harm or public controversy. One gap may be that NXP's AI ethics initiative is relatively old (launched 2020); while the company continues to champion it, there have been no fresh public updates post-2020 beyond integration into current materials. This could imply the program is stable but not widely publicized recently; the presence of NXP's principles in current materials, however, indicates the framework is active. Another potential gap is that, as a component supplier, NXP does not control end-user implementations of its chips; if a customer misused NXP's AI capability in a harmful way, it could reflect back on NXP. So far, no such cases have emerged, and NXP's strategy of partnering on usage (and requiring human-centric design) mitigates this. In summary, there have been no material governance failures or incidents since Oct 2025, and NXP's program appears well documented in public materials.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 80.0 | 85.0 | 80.0 | 75.0 | 80.0 | 80.0 |
| 2025-04-01 | 80.0 | 85.0 | 80.0 | 75.0 | 80.0 | 80.0 |
| 2025-07-01 | 80.0 | 85.0 | 80.0 | 75.0 | 80.0 | 80.0 |
| 2025-10-01 | 80.0 | 85.0 | 80.0 | 75.0 | 80.0 | 80.0 |
| 2026-01-01 | 81.0 | 83.0 | 86.0 | 81.0 | 86.0 | 83.0 |
Fiserv (FISV)
Moved down 1 position to #11 (from #10) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Fiserv remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 80.2 versus 80.0 previously (+0.2). Score movement was primarily driven by Stakeholder Engagement (+3.0), Ethical Principles (-2.0), and Governance Structure (-2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: since 1 Oct 2025, Fiserv has continued to strengthen its internal AI governance program rather than making high-profile public pledges. The company's Responsible AI governance program, established earlier in 2024, remains in effect, overseen by a cross-functional team spanning business, legal, compliance, and other functions. In late 2025, Fiserv focused on applying AI internally (for efficiency and customer experience) under careful oversight. For example, a Nov 2025 report noted Fiserv's "Project Elevate" to embed AI company-wide to improve operations, with plans to detail its AI ROI to investors - indicating a commitment to transparency with stakeholders on AI. While Fiserv issued no new public AI ethics policies in Q4 2025, it did highlight its existing responsible AI training and guidelines for employees (to ensure any AI usage abides by ethical and regulatory standards). No ethics incidents were publicly reported; instead, Fiserv faced general business challenges (e.g., leadership changes and financial guidance issues) and is leveraging AI (under governance oversight) to address them.

No specific AI ethics incidents have come to light for Fiserv since Oct 2025. However, Fiserv did face broader scrutiny in Q4 2025 related to financial performance disclosures, including a class action (late Oct) alleging misrepresentation of business performance. While not an AI ethics issue, this underscores the importance for Fiserv of maintaining trust. On the AI front, one gap is that Fiserv has not published detailed ethical AI principles or joined high-profile AI ethics pledges; this could be read as limited external transparency compared to peers. Nonetheless, the internal pieces (governance council, policies, training) are in place. Another notable point: Fiserv's aggressive adoption of AI ("embedding AI in everything we do," per its public statements) must be matched with equally strong ethics oversight - Fiserv has such oversight, but its effectiveness will be proven over time. So far there have been no reports of AI bias or consumer harm related to Fiserv's AI; here, no news is good news. Overall, no material gaps in responsible AI governance are evident, aside from the quieter public profile of Fiserv's AI ethics program compared to some competitors.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 70.0 | 85.0 | 90.0 | 80.0 | 75.0 | 80.0 |
| 2025-04-01 | 70.0 | 85.0 | 90.0 | 80.0 | 75.0 | 80.0 |
| 2025-07-01 | 70.0 | 85.0 | 90.0 | 80.0 | 75.0 | 80.0 |
| 2025-10-01 | 70.0 | 85.0 | 90.0 | 80.0 | 75.0 | 80.0 |
| 2026-01-01 | 71.0 | 83.0 | 88.0 | 81.0 | 78.0 | 80.0 |
Autodesk (ADSK)
Moved up 3 positions to #12 (from #15) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Autodesk remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 80.0 versus 76.0 previously (+4.0). Score movement was primarily driven by Transparency (+6.0), Governance Structure (+6.0), and Regulatory Alignment (+6.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: from October 2025 to present, Autodesk's major advancement in AI governance was a first-of-its-kind certification of its AI management system. In August 2025, Autodesk announced it had achieved ISO/IEC 42001 certification for responsible AI governance, joining a select few companies meeting this international AI management standard. The certification, earned for Autodesk's AI/ML development platform (AMP), validates that Autodesk has rigorous controls, risk management, and oversight throughout its AI lifecycle. Post-certification, Autodesk continued to embed its "Trusted AI" program principles (responsibility, transparency, accountability, reliability, safety & security) in product development. Autodesk also actively aligned with regulatory and industry initiatives: notably, it was an early signatory of the EU AI Pact (Sept 2024), pledging to adopt AI Act principles ahead of enforcement, and in late 2025 it remained engaged in global policy discussions (Autodesk participates in NIST's AI initiatives and content authenticity efforts). No ethics incidents involving Autodesk's AI were publicly reported; instead, the company has been highlighted as a leader in responsible AI, advocating industry best practices (e.g., publishing a "Trusted AI" white paper and guiding customers in ethical AI adoption). Overall, Autodesk's developments show a concrete strengthening of governance (via certification) and a continued proactive stance toward forthcoming regulations and ethical standards.

No public incidents of unethical AI behavior are associated with Autodesk; its proactive measures appear to have prevented negative outcomes so far. Autodesk's AI, which mostly assists professionals, is not known to have caused harm or major public controversy. One could point to a gap in that Autodesk's AI ethics focus is mostly internal and B2B - there is less external pressure or independent scrutiny than for consumer AI. Autodesk addressed this by seeking ISO certification (bringing in independent auditors). Another gap may emerge as AI becomes more autonomous in design: ensuring liability is clear could be challenging (if an AI-generated design had a flaw, who is accountable?). Autodesk's governance addresses this with human-in-the-loop controls and clear customer terms, but it is an evolving area to watch. Also, while Autodesk engages in content authenticity (to combat deepfakes in media), one could ask whether similar efforts ensure AI-generated designs do not inadvertently reinforce biases (like suggesting only culturally homogeneous design elements). Autodesk's bias testing and dataset diversity aim to cover this, and no issues have surfaced, so it seems under control. In essence, no notable gaps are evident; Autodesk's approach is quite comprehensive. The main challenge will be maintaining this high standard as AI capabilities expand - and Autodesk's own framing that it is "just getting started" with its ISO certification indicates it is aware that governance is a continuous journey.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 75.0 | 80.0 | 80.0 | 75.0 | 70.0 | 76.0 |
| 2025-04-01 | 75.0 | 80.0 | 80.0 | 75.0 | 70.0 | 76.0 |
| 2025-07-01 | 75.0 | 80.0 | 80.0 | 75.0 | 70.0 | 76.0 |
| 2025-10-01 | 75.0 | 80.0 | 80.0 | 75.0 | 70.0 | 76.0 |
| 2026-01-01 | 81.0 | 77.0 | 86.0 | 81.0 | 75.0 | 80.0 |
Intuit (INTU)
Moved up 1 position to #13 (from #14) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Intuit remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 80.0 versus 77.6 previously (+2.4). Score movement was primarily driven by Regulatory Alignment (+4.0), Transparency (+2.0), and Ethical Principles (+2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: post-Oct 2025, Intuit has continued to mature the comprehensive Responsible AI program it launched earlier. The company's five-pillar Responsible AI principles - powering prosperity, enhancing human talent, fairness, accountability, transparency, and privacy & security - remain the foundation of its AI work. In late 2025, Intuit notably expanded its Responsible AI governance team and training, embedding RAI "champions" across business units and rolling out an internal AI risk evaluation tool for new projects. Intuit's board-level oversight of AI also solidified: the Audit & Risk Committee of the Board now explicitly oversees the Responsible AI program. Intuit aligned its AI governance with external frameworks - for example, it publicly states alignment with the NIST AI Risk Management Framework and has advocated for responsible AI in industry forums (Intuit's Chief Data Officer has blogged about ethical AI best practices). No significant AI-related incidents occurred; Intuit has proactively managed risks (e.g., adding human review for high-stakes AI outputs like loan decisions). In summary, Intuit's developments in this period were about operationalizing and scaling its well-documented AI governance (e.g., more training, formalized reviews) rather than announcing new policies, since the framework was already in place by 2024.

Intuit has had no known public AI ethics incidents or controversies since Oct 2025; its careful approach appears effective at preventing issues like biased outcomes or customer harm. One gap worth noting is that Intuit's products rely on AI for consequential decisions (like credit offers via Credit Karma), where any error or bias could significantly affect individuals. While no scandal has arisen, the stakes remain high, meaning Intuit must continuously guard against subtle biases in credit or tax domains; Intuit addresses this risk by treating those use cases with special care (heightened review, human oversight). Another potential gap is simply vigilance as the company rapidly scales AI (Intuit introduced an AI assistant across its product suite in 2023-2024); ensuring consistent governance at that scale is challenging, though Intuit's expanding RAI team and champion network directly address it. Finally, Intuit is very advanced in governance, so gaps are few; one could argue it is slightly less vocal publicly about its responsible AI efforts than their apparent internal extent would warrant. For example, while Intuit has a Responsible AI page and shared a whitepaper ("Empowering Innovation through Responsible AI Governance," 2023), it could publish more frequent updates. But this is a minor observation: in practice, no notable incidents or governance failures have occurred, and Intuit's program looks like a model of good practice in the industry.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 75.0 | 78.0 | 70.0 | 75.0 | 90.0 | 78.0 |
| 2025-04-01 | 75.0 | 78.0 | 70.0 | 75.0 | 90.0 | 78.0 |
| 2025-07-01 | 75.0 | 78.0 | 70.0 | 75.0 | 90.0 | 78.0 |
| 2025-10-01 | 75.0 | 78.0 | 70.0 | 75.0 | 90.0 | 78.0 |
| 2026-01-01 | 77.0 | 80.0 | 72.0 | 79.0 | 92.0 | 80.0 |
Apple (AAPL)
Moved down 1 position to #14 (from #13) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026, Apple remains in the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 78.0 versus 78.6 previously (-0.6). Score movement was primarily driven by Regulatory Alignment (-5.0), Transparency (+2.0), and Ethical Principles (-1.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals.

Key Q4 2025 AI governance and ethics developments: since Oct 2025, Apple has announced no new AI ethics policies, but it has quietly reinforced its existing responsible AI approach through product updates and shareholder communications. In late 2025, Apple's AI advances (for example, improving its on-device foundation models for Siri in iOS 18) were "grounded in our Responsible AI approach with safeguards like content filtering and locale-specific evaluation," according to an Apple Machine Learning Research publication. This indicates Apple's continued commitment to controlling AI outputs for safety and cultural sensitivity. Apple remains focused on privacy-first AI, as evidenced by features that keep AI processing on device or within its "Private Cloud Compute" infrastructure to protect user data. A notable development in Apple's governance context was a shareholder proposal (voted in early 2025, with results known in Q1 2025) calling on Apple to report on "Ethical AI Data Acquisition and Usage." While this predates October, Apple's board formally responded in its 2025 proxy (pre-October), opposing the need for a separate AI ethics report and asserting that its existing privacy and data use practices cover these concerns. During Oct-Dec 2025, Apple did update its software terms to restrict certain AI uses (for example, new App Store Review guidelines in late 2025 require apps using AI to have appropriate content filters). Overall, Apple's AI governance is embedded in its broader corporate ethos of user privacy, safety, and human rights, and that continued unchanged through late 2025, with no scandals or major policy shifts.

No AI ethics failures have been publicly attributed to Apple since Oct 2025. Apple's AI remains relatively conservative (e.g., Siri is less prone to generating false or harmful content than other chatbots, partly due to limited capability - which can be read as Apple prioritizing safety over capability). One identified gap is that Apple avoids publishing a standalone AI ethics policy or report. The early-2025 shareholder proposal highlighted this, suggesting Apple "improve disclosure" on AI; Apple's board countered that existing disclosures suffice, but the absence of a dedicated AI ethics report is notable compared with some peers. Another gap: Apple's secretive culture means external stakeholders must infer how Apple governs AI internally. This limited explicit transparency might concern some observers, but Apple banks on its track record (few scandals, a well-documented privacy stance) to reassure stakeholders. The rejected shareholder proposal and Apple's limited voluntary public commitments show it prefers to let its actions speak for themselves. Finally, because Apple's AI is largely internal and assistive, it has not faced the same level of societal-impact scrutiny (no filter-bubble or election-interference controversies tied to its AI). If anything, the gap is Apple's limited AI feature set to date - which some argue lags in innovation due to its cautious approach; from an ethics view, that caution may actually be a strength (fewer incidents). In summary, there were no material negative incidents; the main gap lies in external communication rather than in practice.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 80.0 | 85.0 | 75.0 | 82.0 | 70.0 | 78.0 |
| 2025-04-01 | 80.0 | 85.0 | 75.0 | 82.0 | 70.0 | 78.0 |
| 2025-07-01 | 80.0 | 85.0 | 75.0 | 82.0 | 70.0 | 78.0 |
| 2025-10-01 | 80.0 | 86.0 | 75.0 | 82.0 | 70.0 | 79.0 |
| 2026-01-01 | 82.0 | 85.0 | 75.0 | 77.0 | 71.0 | 78.0 |
Meta (Facebook) (META)
Moved down 3 positions to #15 (from #12) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 77.8 versus 78.8 previously (-1.0). Score movement was primarily driven by Regulatory Alignment (-6.0), Governance Structure (-5.0), and Stakeholder Engagement (+4.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included Since October 2025, Meta's AI governance landscape has been marked by both external regulatory pressures and internal adjustments. In late 2025, Meta introduced new AI features (such as its Meta AI chatbot across platforms) and simultaneously encountered regulatory pushback in Europe over competitive fairness. In December 2025, the EU Commission opened an antitrust investigation into Meta's policy of blocking rival AI chatbots on WhatsApp, on concerns it unfairly advantaged Meta's own AI. This reflects a governance gap in considering broader "AI ecosystem" fairness. Internally, Meta has refined its AI content guidelines following an earlier publicly discussed concern (Aug 2025) where leaked internal documents showed lapses in AI chatbot content moderation (. allowing potentially inappropriate roleplay scenarios). Meta claimed those problematic guidelines were corrected. On the positive side, Meta continues to open source AI models and participate in global AI safety efforts (it was among companies that agreed at the UK AI Safety Summit in Nov 2023 to test frontier models with governments). Post-Oct 2025, Meta has not published new ethical AI principles, but it reiterates commitments to transparency, safety, and human rights in AI - for instance, updating its privacy policy (effective Oct 21, 2025) to clarify data usage in. Overall, Meta's AI governance is evolving under significant oversight, aiming to balance rapid innovation with ethical safeguards and regulatory compliance; and Meta had a significant governance lapse revealed in August 2025: internal AI guidelines approved content for chatbots that most would deem unacceptable (romantic responses to minors, hateful or false statements if labeled as fiction). This publicly reported event exposed a gap in Meta's internal oversight and judgment, suggesting that commercial or engagement objectives overrode ethical considerations. Meta acted to remove those "erroneous" guidelines after exposure, but the episode indicates the need for stronger internal ethical veto power. Another publicly reported event is the EU antitrust investigation (Dec 2025), which, while about competition, relates to Meta's governance of its platform and AI access. This is a gap in stakeholder fairness - regulators felt Meta didn't sufficiently consider competitors (and by extension consumer choice). On a broader note, Meta's refusal of the EU's GPAI Code (July 2025) is viewed by some as a gap in Meta's willingness to voluntarily go above the legal minimum for AI. However, Meta insists it will meet actual laws and prefers global standards over region-specific voluntary. No new public-facing AI principles were issued by Meta in late 2025, which could be seen as a transparency gap, though Meta likely operates under its existing responsible AI tenets internally. 
In summary, Meta's gaps are: (1) content moderation governance, which needed tightening and more external input; (2) platform openness, now under scrutiny by the EU; and (3) communication, with room to improve proactive ethical disclosures. These gaps are counterbalanced by swift remediation efforts and continued external engagement, but they do underline the challenges Meta faces in governing AI at its scale.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 78.0 | 80.0 | 75.0 | 80.0 | 82.0 | 79.0 |
| 2025-04-01 | 78.0 | 80.0 | 75.0 | 80.0 | 82.0 | 79.0 |
| 2025-07-01 | 78.0 | 80.0 | 75.0 | 80.0 | 82.0 | 79.0 |
| 2025-10-01 | 78.0 | 80.0 | 75.0 | 79.0 | 82.0 | 79.0 |
| 2026-01-01 | 79.0 | 81.0 | 70.0 | 73.0 | 86.0 | 78.0 |
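The published figures imply a simple scoring relationship that can be checked directly: each composite equals the unweighted mean of the five pillar scores (the prose keeps one decimal, e.g., Meta's 77.8, while the History tables display the integer-rounded value, 78.0), and the "primarily driven by" call-outs match the three largest quarter-over-quarter pillar moves by absolute size. Below is a minimal sketch in Python under that inferred equal weighting; the weighting is read off the published numbers, not from a stated methodology.

```python
# Sketch: reproduce the AIGES composite and "primarily driven by" attribution
# from the published pillar scores. Equal pillar weighting is inferred from
# the tables (e.g., Meta 2026-01-01: mean(79, 81, 70, 73, 86) = 77.8), not
# taken from a stated methodology. History tables show the composite rounded
# to the nearest integer (78.0), while the prose keeps one decimal (77.8).

PILLARS = ["Transparency", "Ethical Principles", "Governance Structure",
           "Regulatory Alignment", "Stakeholder Engagement"]

def composite(scores: dict[str, float]) -> float:
    """Unweighted mean of the five pillar scores, rounded to one decimal."""
    return round(sum(scores[p] for p in PILLARS) / len(PILLARS), 1)

def top_drivers(prev: dict[str, float], curr: dict[str, float], n: int = 3):
    """Largest quarter-over-quarter pillar moves by absolute size."""
    deltas = {p: curr[p] - prev[p] for p in PILLARS}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

# Meta (META), from the history table above.
meta_q4 = dict(zip(PILLARS, [78.0, 80.0, 75.0, 79.0, 82.0]))  # 2025-10-01
meta_q1 = dict(zip(PILLARS, [79.0, 81.0, 70.0, 73.0, 86.0]))  # 2026-01-01

print(composite(meta_q1))             # 77.8, matching the reported composite
print(top_drivers(meta_q4, meta_q1))  # Regulatory Alignment -6.0,
                                      # Governance Structure -5.0,
                                      # Stakeholder Engagement +4.0
```

The same arithmetic reproduces every constituent's reported composite and driver list in this section, which supports the equal-weighting reading.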
AMD (AMD)
Held steady at #16 versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 76.0 versus 73.0 previously (+3.0). Score movement was primarily driven by Regulatory Alignment (+6.0), Stakeholder Engagement (+5.0), and Ethical Principles (+2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Since Oct 2025, AMD has bolstered its Responsible AI efforts primarily by formalizing internal policies and continuing external collaborations. In Feb 2025, AMD updated its Responsible AI Use Policy (publicly posted), which underscores AMD's commitment to ethical AI products. By late 2025, AMD highlights that it has a Responsible AI framework with guiding values (privacy, human focus, fairness, etc.) and an internal Responsible AI Council steering AI governance. AMD's developments in this period include aligning its program with global standards (AMD explicitly aligns with the NIST AI Risk Management Framework and is an active member of industry responsible AI groups), and championing industry initiatives for AI transparency and security (like confidential computing for data privacy). Additionally, AMD launched major partnerships (e.g., with Hugging Face and others on an open AI ecosystem) to promote ethical innovation. There have been no negative publicly reported events involving AMD's AI; however, AMD did make headlines in 2025 for AI business moves (like partnering with OpenAI on AI chips), which underscored the importance of its Responsible AI guidelines to prevent misuse of its powerful hardware. Overall, AMD's Q4 2025 posture is a continuation of its proactive strategy established earlier: emphasizing energy-efficient, fair, and secure AI and providing guidance to ensure its technology is used for good; and AMD has not been associated with negative AI ethics publicly reported events. There is no public case of AMD technology causing ethical harm - being a component supplier, AMD is one step removed from direct AI decisions. One potential gap is ensuring downstream users abide by AMD's Responsible AI Use Policy. While AMD can encourage and provide tools (and refuse some deals, e.g., not selling the most advanced AI chips to certain entities under export restrictions), ultimately AMD cannot fully control how every customer uses its chips. This is a structural gap that AMD manages through engagement and policy, but it is not foolproof. Another gap could be that AMD's principles might not be widely known outside tech circles - e.g., public perception of AI ethics often focuses on service companies, not chipmakers. AMD could increase public reporting on how its products are used responsibly (though it does some of this via case studies). On energy, AMD set a big goal (a 30x energy-efficiency gain for AI/HPC by 2025) and was on track; if that goal was not fully met, it might be a gap in sustainability achievement, though its latest reporting shows substantial progress. This is a technical gap, not an ethical failing, and AMD's transparency about progress mitigates it. In summary, no ethical lapses; the only challenge is the influence gap - AMD must rely on partners to share its ethics commitment, but it is actively working on that through collaboration.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 65.0 | 70.0 | 75.0 | 80.0 | 75.0 | 73.0 |
| 2025-04-01 | 65.0 | 70.0 | 75.0 | 80.0 | 75.0 | 73.0 |
| 2025-07-01 | 65.0 | 70.0 | 75.0 | 80.0 | 75.0 | 73.0 |
| 2025-10-01 | 65.0 | 70.0 | 75.0 | 80.0 | 75.0 | 73.0 |
| 2026-01-01 | 66.0 | 72.0 | 76.0 | 86.0 | 80.0 | 76.0 |
Arm Holdings (ARM)
Moved up 1 position to #17 (from #18) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 74.6 versus 72.0 previously (+2.6). Score movement was primarily driven by Stakeholder Engagement (+5.0), Ethical Principles (+3.0), and Regulatory Alignment (+3.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Arm engages with regulatory trends indirectly. Its NeurIPS messaging underscored "stability as the foundation of AI innovation", aligning with calls for safer AI. Arm's CEO has participated in international AI initiatives (e.g., the UK AI Safety Summit in 2023 and initiatives in Japan), demonstrating awareness of emerging regulations. Arm is also a supplier of AI-enabling chips, so it tracks policies like export controls and the EU AI Act. Arm's platform is positioned to help partners comply with upcoming AI laws by focusing on on-device AI (privacy) and secure AI hardware. (No specific statement on the EU AI Act has appeared since Oct 2025, but Arm's open-source approach and standards support indicate alignment with global AI governance frameworks.); and no negative publicly reported events (e.g., misuse of Arm's AI technology) have been reported in the period. A possible gap is the absence of a public AI ethics manifesto: Arm's ethical stance is evident but not consolidated in a single public document. However, Arm did take a notable positive step earlier in 2025 by signing the Rome Call for AI Ethics, committing to principles like transparency, inclusion, impartiality, and reliability. (This occurred in June 2025, slightly before the cutoff, but underscores Arm's ethics alignment.) Overall, the absence of new material disclosures since Oct suggests Arm's AI governance is steady-state, focused on integrating ethics into product development rather than new policies.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 50.0 | 80.0 | 70.0 | 80.0 | 80.0 | 72.0 |
| 2025-04-01 | 50.0 | 80.0 | 70.0 | 80.0 | 80.0 | 72.0 |
| 2025-07-01 | 50.0 | 80.0 | 70.0 | 80.0 | 80.0 | 72.0 |
| 2025-10-01 | 50.0 | 80.0 | 70.0 | 80.0 | 80.0 | 72.0 |
| 2026-01-01 | 50.0 | 83.0 | 72.0 | 83.0 | 85.0 | 75.0 |
Qualcomm (QCOM)
Moved up 2 positions to #18 (from #20) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 74.2 versus 70.0 previously (+4.2). Score movement was primarily driven by Ethical Principles (+6.0), Transparency (+5.0), and Stakeholder Engagement (+5.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Qualcomm is actively aligning with global AI governance initiatives. It publicly endorsed the Rome Call for AI Ethics in 2025, which complements global regulatory principles and indicates support for international ethical standards. Qualcomm has also engaged with policymakers: for instance, it voiced support for the G7 Hiroshima AI process and is aware of the EU AI Act's implications (as a major chip provider for AI, it monitors compliance needs for AI-enabled devices). The company's Responsible AI lead has likely contributed input to regulatory consultations (Qualcomm often files comments on tech policy). Moreover, Qualcomm's AI solutions are marketed as "AI compliance ready" - focusing on on-device AI that can help customers meet privacy regulations. In late 2025, Qualcomm joined industry and government forums (e.g., it was named to the TIME100 AI list, implying engagement with AI policy circles). No misalignment issues have surfaced; instead, Qualcomm tends to advocate for balanced AI regulations that promote innovation while guarding ethics. Its general counsel and policy teams have emphasized that early compliance builds trust (as noted in an Atlassian conference summary). Overall, Qualcomm appears pro-regulation in AI, pledging to follow frameworks like the AI Bill of Rights (US) or global AI pacts; and Qualcomm has had no known AI ethics publicly discussed concerns in this period. Instead, it achieved a positive milestone by being one of the first companies to sign onto a global AI ethics pledge (the Rome Call). A gap might be that Qualcomm's AI ethics efforts are less publicized than those of some peers (no dedicated responsible AI report yet). However, any earlier commitment to publish an AI transparency report (per investor engagements) would address this. Also, while Qualcomm has well-described (in public materials) principles, it operates largely as a B2B tech enabler - meaning it must rely on partners to implement AI ethically on devices using Qualcomm chips. This raises a possible gap: ensuring downstream use of its technology aligns with its principles. Qualcomm partially addresses this by expecting suppliers and partners to follow similar AI ethics standards. Another notable item: Qualcomm's board changes (adding an AI expert) can be seen as a response to stakeholder expectations that AI receive board-level attention - which it did. Overall, no publicly reported events occurred, and Qualcomm's "gaps" are more about transparency in communication, which it is actively improving.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 70.0 | 75.0 | 70.0 | 70.0 | 65.0 | 70.0 |
| 2025-04-01 | 70.0 | 75.0 | 70.0 | 70.0 | 65.0 | 70.0 |
| 2025-07-01 | 70.0 | 75.0 | 70.0 | 70.0 | 65.0 | 70.0 |
| 2025-10-01 | 70.0 | 75.0 | 70.0 | 70.0 | 65.0 | 70.0 |
| 2026-01-01 | 75.0 | 81.0 | 72.0 | 73.0 | 70.0 | 74.0 |
Western Digital (WDC)
Added to the index at the January 1, 2026 rebalance and ranked #19 based on publicly available information reviewed for Q1 2026. Entered the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 71.8. Strengths are concentrated in Ethical Principles (81.0) and Governance Structure (75.0), while the most material relative gap is Regulatory Alignment (63.0); see the sketch following the History table below. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Western Digital has aligned itself prominently with ethical business and AI innovation. In March 2025 it was honored as one of Ethisphere's World's Most Ethical Companies, underscoring relatively extensive (based on public disclosures) governance and ethics. In November 2025 the company showcased new storage products designed for AI and HPC at the Supercomputing conference. In October 2025, media reported WD's plan to invest $1 billion in Japanese R&D for next-gen HDDs to meet AI-driven demand. These actions demonstrate a strategic focus on supporting AI but do not reflect AI governance policies per se; Western Digital complies with global regulations (data privacy, export controls, environmental laws). It has not announced AI-specific regulatory strategies. WD has, however, mentioned aligning its sustainability strategy with broad EU regulations (e.g., supply chain due diligence), though not AI-specific ones; and the company did enact a price increase on HDDs due to unprecedented AI-driven demand (reported in Oct 2025); this market action drew attention but was a business response rather than a publicly reported ethics event. A gap is that WD has not publicized any formal "responsible AI" policy, even as it positions itself as an AI infrastructure leader.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2026-01-01 | 67.0 | 81.0 | 75.0 | 63.0 | 73.0 | 72.0 |
Only the latest rebalance is available. Historical scores will appear as new rebalances are added.
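For newly added constituents such as Western Digital, the strengths/gap call-outs likewise follow directly from the latest pillar row: the two highest pillars are flagged as strengths and the single lowest as the most material relative gap. The sketch below applies that inferred selection rule; the rule is read off these write-ups, not a stated methodology.

```python
# Sketch: derive the "strengths" / "most material relative gap" call-outs used
# for newly added constituents. The selection rule (two highest pillars vs. the
# single lowest) is inferred from the Western Digital write-up, not stated.

PILLARS = ["Transparency", "Ethical Principles", "Governance Structure",
           "Regulatory Alignment", "Stakeholder Engagement"]

def strengths_and_gap(scores: dict[str, float]) -> tuple[list[str], str]:
    """Return the top-two pillars (strengths) and the lowest pillar (gap)."""
    ranked = sorted(PILLARS, key=lambda p: scores[p], reverse=True)
    return ranked[:2], ranked[-1]

wdc = dict(zip(PILLARS, [67.0, 81.0, 75.0, 63.0, 73.0]))  # 2026-01-01 row
print(strengths_and_gap(wdc))
# (['Ethical Principles', 'Governance Structure'], 'Regulatory Alignment')
```

The same rule reproduces the call-outs for Monolithic Power Systems below (Governance Structure and Stakeholder Engagement, both 73.0, as strengths; Ethical Principles, 67.0, as the gap).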
Honeywell (HON)
Moved up 1 position to #20 (from #21) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 71.6 versus 69.0 previously (+2.6). Score movement was primarily driven by Stakeholder Engagement (+5.0), Regulatory Alignment (+4.0), and Governance Structure (+2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Honeywell appears keenly attuned to the regulatory landscape for AI, especially given its global industrial operations. Its internal guidance explicitly notes that rapid AI developments require companies "and regulators [to] work together" to meet new requirements. Honeywell's governance framework was designed in anticipation of laws like the EU AI Act - for example, the emphasis on documentation, risk assessment, and human oversight in Honeywell's Responsible AI Policy mirrors requirements in the EU AI Act. The company's Privacy Office monitors global AI regulations and ensures compliance with data protection laws in AI deployments. Honeywell is also an active participant in policy discussions: it contributes to industry bodies (like the U.S. Chamber of Commerce AI Working Group and forums under NIST/ISO for AI standards). In 2025, Honeywell's leaders publicly advocated for human-centric AI laws in high-stakes fields (healthcare, critical infrastructure), consistent with its principles. There is no indication Honeywell has faced any regulatory non-compliance; on the contrary, two of its smart factories were recognized by the World Economic Forum for advanced (and presumably compliant) AI use. The company also ensures AI is aligned with safety regulations - e.g., AI controlling physical processes goes through rigorous safety certification. Overall, Honeywell is proactively aligning its AI governance with evolving laws to stay ahead of compliance needs and to shape sensible regulations through industry engagement; and Honeywell has not been associated with any AI-related scandals or public ethical failures. There were no known publicly reported events of AI misuse (e.g., no reports of biased algorithms or unsafe AI operations) emerging since Oct 2025. A notable positive outcome is Honeywell's Global Impact Award in 2025 for AI innovation, which implicitly validated its ethical approach. One potential gap might be public communication: Honeywell's well-described (in public materials) internal framework is clear, but it has not issued a dedicated external AI ethics report. Stakeholders must glean its practices from broader ESG disclosures. However, given that investors withdrew a potential shareholder proposal on AI (similar proposals at tech companies were withdrawn after commitments), Honeywell likely satisfied any calls for more transparency. Another gap could be the breadth of its Stakeholder Engagement pillar - while Honeywell excels in engaging industry and employees, it is less visible in civil society forums compared to pure tech firms. Lastly, as Honeywell adopts more AI in critical systems, it faces the ongoing challenge of continuously auditing for bias or error - a gap it acknowledges by stating that AI systems will be "monitored over time" for fairness.
In conclusion, there have been no material negative issues, and Honeywell's main gap is perhaps ensuring its rigorous internal standards are fully conveyed externally - something it is improving on.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 80.0 | 80.0 | 75.0 | 60.0 | 50.0 | 69.0 |
| 2025-04-01 | 80.0 | 80.0 | 75.0 | 60.0 | 50.0 | 69.0 |
| 2025-07-01 | 80.0 | 80.0 | 75.0 | 60.0 | 50.0 | 69.0 |
| 2025-10-01 | 80.0 | 80.0 | 75.0 | 60.0 | 50.0 | 69.0 |
| 2026-01-01 | 81.0 | 81.0 | 77.0 | 64.0 | 55.0 | 72.0 |
Palo Alto Networks (PANW)
Moved down 2 positions to #21 (from #19) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 71.6 versus 70.0 previously (+1.6). Score movement was primarily driven by Regulatory Alignment (+4.0), Stakeholder Engagement (+2.0), and Transparency (+1.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Palo Alto Networks is closely aligned with evolving AI regulations and standards. It explicitly references and supports frameworks like the U.S. Executive Order on AI Security (June 2025), the White House AI Action Plan (July 2025) calling for "Secure-by-Design AI", and draft NIST AI security guidelines. The company's roadmap was developed to bridge the gap between high-level policy intent and actionable controls. Palo Alto also proactively influences regulation - for example, by sponsoring MITRE ATLAS and contributing to OWASP, it helps define best practices regulators may adopt. Its policy blog encourages making AI security and governance core government priorities. In summary, Palo Alto is preparing customers for compliance with upcoming AI laws (like the EU AI Act) by embedding security and auditability features, and it actively engages with policymakers to shape pragmatic AI governance; and Palo Alto Networks had no public AI ethics scandals or regulatory penalties in this period. Notably, it pre-emptively addresses gaps: recognizing that many companies lack mature AI security capabilities, it positioned its technology to fill that gap. One potential gap is that Palo Alto has not publicly issued a formal Responsible AI principles document, even as it clearly operates by such principles (security, fairness, etc.). However, the company's actions (product design and policy engagement) effectively stand in for an explicit principles list. No internal pushback is evident; Palo Alto's workforce presumably supports its mission to "turn security into an AI innovation accelerator". In summary, Palo Alto's record since Oct 2025 indicates proactive leadership with no major lapses - it is closing industry gaps rather than exhibiting them.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 60.0 | 70.0 | 65.0 | 80.0 | 75.0 | 70.0 |
| 2025-04-01 | 60.0 | 70.0 | 65.0 | 80.0 | 75.0 | 70.0 |
| 2025-07-01 | 60.0 | 70.0 | 65.0 | 80.0 | 75.0 | 70.0 |
| 2025-10-01 | 60.0 | 70.0 | 65.0 | 80.0 | 75.0 | 70.0 |
| 2026-01-01 | 61.0 | 70.0 | 66.0 | 84.0 | 77.0 | 72.0 |
Workday (WDAY)
Moved down 5 positions to #22 (from #17) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 71.2 versus 72.2 previously (-1.0). Score movement was primarily driven by Ethical Principles (-4.0), Governance Structure (-3.0), and Transparency (+2.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Since Oct 2025, Workday has notably expanded its Responsible AI (RAI) governance program and continued to lead by example in advocating for ethical AI in HR. In late 2025, Workday published a detailed whitepaper, "Empowering Innovation through Responsible AI Governance" (announced on its blog), that shares what Workday has been doing in RAI and what it plans next. Key developments include the formalization of an RAI Advisory Board chaired by its Chief Legal Officer and involving C-level executives such as the Chief Diversity Officer. This board meets regularly to oversee AI principles implementation and review novel AI issues. Workday also implemented an internal RAI risk-evaluation tool that product managers must use at AI project ideation to identify risk level and required controls - a concrete operational step taken in 2025. Additionally, Workday has operationalized ML model fact sheets ("datasheets") for transparency to customers on how its AI models are built and trained. Throughout Q4 2025, Workday actively engaged regulators and industry groups, being a "vocal proponent of AI regulation that builds trust and promotes innovation". Notably, Workday's RAI lead participated in policy discussions in the US and EU, indicating Workday's commitment to shape and comply with emerging rules. There have been no reported ethical publicly reported events with Workday's AI; however, Workday is facing an ongoing lawsuit (filed 2023, still active in 2025) alleging bias in its AI hiring tools. Workday has firmly defended its practices, citing its fairness and bias-mitigation efforts. In summary, Workday's post-Oct 2025 period is characterized by transparency and sharing of its well-described (in public materials) RAI practices, strengthening internal oversight, and staying ahead of regulatory requirements - all while navigating legal scrutiny that underscores the importance of its RAI work; and the most notable issue around Workday's AI is the class-action lawsuit (Mobley v. Workday) alleging that Workday's AI-driven applicant screening system systematically discriminated against certain groups (older, Black, and disabled applicants). The case (filed Feb 2023) progressed in 2025 (e.g., as of mid-2025 a court was considering motions) and remains unresolved. The lawsuit does not cite a specific catastrophic event but rather alleges a pattern of bias. Workday has labeled the claims "without merit" and pointed to its fairness controls. This situation reveals a gap between Workday's stated principles and the perception (or experience) of some users. It underscores that even with well-described (in public materials) bias mitigation, stakeholders might question AI outcomes - highlighting the importance of continuous improvement and perhaps more transparency at the individual level (like letting rejected applicants know that AI was not the sole decider).
Workday's response has been to double down on fairness: its RAI program likely re-reviewed all hiring algorithms after the lawsuit and increased bias-testing frequency; this is not public but is expected. Apart from the lawsuit, Workday has not faced regulatory penalties or known failures. But the case is a real test of Workday's governance: it has to demonstrate in court that its AI is fair. The outcome (pending) could surface gaps in data or methodology, or vindicate Workday. Another minor gap could be that Workday's rapid deployment of AI (every release sees new AI features) poses scaling challenges - Workday is addressing that by scaling its RAI team and processes. Overall, no publicly reported events of actual malfeasance have been confirmed; the primary gap is convincing external skeptics (like the plaintiff in the lawsuit) that Workday's AI is indeed equitable. Workday's strategy of proactive governance and willingness to engage suggests it is on the right path to closing any perception gaps.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 68.0 | 70.0 | 65.0 | 78.0 | 80.0 | 72.0 |
| 2025-04-01 | 68.0 | 70.0 | 65.0 | 78.0 | 80.0 | 72.0 |
| 2025-07-01 | 68.0 | 70.0 | 65.0 | 78.0 | 80.0 | 72.0 |
| 2025-10-01 | 68.0 | 70.0 | 65.0 | 78.0 | 80.0 | 72.0 |
| 2026-01-01 | 70.0 | 66.0 | 62.0 | 77.0 | 81.0 | 71.0 |
Monolithic Power Systems (MPWR)
Added to the index at the January 1, 2026 rebalance and ranked #23 based on publicly available information reviewed for Q1 2026. Entered the top 25 with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 71.0. Strengths are concentrated in Governance Structure (73.0) and Stakeholder Engagement (73.0), while the most material relative gap is Ethical Principles (67.0). Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: no company announcements or policies specifically on AI ethics or governance were noted for Monolithic Power Systems in late 2025. The company's 2025 Corporate Responsibility Report (for year 2024) is published but did not yield accessible AI-related text. MPS's public content includes educational articles on sensor technology and AI (e.g., "Ethical Considerations in Sensor Deployments") that touch on data ethics, but these appear to be general guidance rather than formal disclosures; MPS is aware of data and privacy laws (its content references GDPR), but no specific AI regulatory strategy is stated. It follows global standards in electronics (e.g., conflict minerals, environmental regulations) but has not announced compliance frameworks for AI (such as EU AI Act readiness); and no AI-related events have been publicly reported. The main gap is the absence of a formal responsible-AI statement or dedicated governance disclosures; existing materials are largely technical or ESG-focused without mentioning "ethical AI" by name.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2026-01-01 | 71.0 | 67.0 | 73.0 | 71.0 | 73.0 | 71.0 |
Only the latest rebalance is available. Historical scores will appear as new rebalances are added.
Amazon (AMZN)
Moved down 2 positions to #24 (from #22) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: continues to be in the top 25 (4% weight) with a 4.00% index weight. The AI Governance & Ethics Score (AIGES) composite is 70.8 versus 67.0 previously (+3.8). Score movement was primarily driven by Transparency (+6.0), Regulatory Alignment (+6.0), and Stakeholder Engagement (+5.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Amazon is heavily involved in aligning with and shaping AI regulations. It has publicly welcomed the Biden Administration's AI initiatives - e.g., Amazon committed to the White House's voluntary AI safeguards in July 2023 and is now working to implement the AI Executive Order (Oct 2023). Amazon's responsible AI page explicitly mentions engaging with governments and regulators worldwide to "align our work with evolving standards". For instance, Amazon supports the EU's Code of Conduct on AI (an interim measure before the AI Act) and joined the EU's AI Pact in 2023, pledging to abide by upcoming obligations. The company's General Counsel has advocated for "safety, innovation, and security" in global AI policy, showing Amazon wants regulations that protect rights but also allow innovation (hence its emphasis on collaboration for balanced rules). Amazon's cloud division (AWS) achieved ISO 42001 AI Management System certification - effectively a regulatory-alignment step, as that standard mirrors governance best practices likely to appear in laws. In late 2025, Amazon also joined the US AI Safety Institute advisory consortium, meaning it will help set standards that could inform regulation. The employee open letter accused Amazon of "casting aside its climate goals to build AI"; in response, Amazon reaffirmed it will power AI with clean energy (thus aligning AI expansion with climate regulations). Additionally, Amazon's commitment that AI should not violate copyright or performers' rights (noted in the shareholder resolution) indicates readiness to comply with IP laws around AI. All told, Amazon is not fighting regulation but engaging - likely to ensure rules like the EU AI Act and potential U.S. legislation consider industry input, and positioning itself as compliant and supportive of global AI governance; and a notable publicly reported event was the employee open letter of Nov 2025, which can be viewed as both a publicly reported event and a form of stakeholder activism. The letter warned that Amazon's "warp-speed" AI adoption could cause "staggering damage" to democracy, jobs, and the environment. This was unprecedented in its scale (signed by 1,000+ workers across roles) and led to media scrutiny (Guardian, Fortune, etc.). The event exposed a gap between Amazon's leadership vision and employee trust; employees felt AI was being deployed without sufficient ethical guardrails. Another publicly reported event, indirectly referenced, was the Hollywood writers' and actors' strikes (mid-2023) - resolved by Q4 2023 - where Amazon's studio was involved in debates on AI usage in content creation. By late 2025, Amazon had committed (via industry agreements) not to use AI to replace actors or writers without consent, effectively closing that ethical gap.
Gaps: despite Amazon's relatively extensive (based on public disclosures) responsible AI framework on paper, employees highlighted gaps in practice - e.g., AI potentially used in productivity monitoring or hiring could harm workers or bias decisions. The letter demanded a worker-led committee on AI and stricter limits on surveillance. It also pointed out a climate gap: Amazon's massive AI data centers might conflict with its climate goals. Amazon has gaps in reconciling AI growth with its Climate Pledge (net-zero by 2040) - something it now must address by powering AI with renewables (e.g., its new $3B and $15B data-center investments will need clean energy). Another gap was communication: prior to the open letter, many employees felt in the dark about Amazon's AI plans. Since then, Amazon has increased internal town halls on AI and published its Responsible AI overview. In conclusion, the late-2025 events served as a wake-up call that prompted Amazon to better align its rapid AI innovation with its professed ethical standards, addressing gaps in transparency, worker involvement, and climate consistency. Amazon is now actively working to close these gaps (e.g., by implementing the letter's suggestions like not enabling "mass surveillance or mass deportation" through its AI - a likely reference to Ring camera policies and Rekognition, which Amazon continues to keep under moratorium for police use).
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 70.0 | 55.0 | 50.0 | 65.0 | 85.0 | 65.0 |
| 2025-04-01 | 70.0 | 55.0 | 50.0 | 75.0 | 85.0 | 67.0 |
| 2025-07-01 | 70.0 | 55.0 | 50.0 | 75.0 | 85.0 | 67.0 |
| 2025-10-01 | 70.0 | 55.0 | 50.0 | 75.0 | 85.0 | 67.0 |
| 2026-01-01 | 76.0 | 56.0 | 51.0 | 81.0 | 90.0 | 71.0 |
Comcast (CMCSA)
Moved up 1 position to #25 (from #26) versus the October 1, 2025 rebalance. Based on publicly available information reviewed for Q1 2026: entered the top 25, increasing index weight to 4.00% (previously 0.00%). The AI Governance & Ethics Score (AIGES) composite is 69.4 versus 66.0 previously (+3.4). Score movement was primarily driven by Transparency (+5.0), Regulatory Alignment (+4.0), and Stakeholder Engagement (+4.0). Rank movement reflects both score changes and the updated peer set following constituent additions and removals. Key Q4 2025 Artificial Intelligence governance and ethics developments included the following: Comcast's businesses intersect with several emerging AI regulatory concerns - from biometric surveillance to algorithmic bias - and Comcast appears to be aligning accordingly. The shareholder resolution explicitly invoked the White House's Blueprint for an AI Bill of Rights. Comcast's commitment to producing an AI transparency report and ethical guidelines effectively aligns it with those principles preemptively. In terms of concrete regulation, one area is privacy (Comcast must comply with FCC privacy rules for ISPs and various state privacy laws). If Comcast uses AI on customer data for personalized content or ads, it aligns with those regulations by, for example, not using sensitive data without consent - accordingly, Comcast's AI guidelines stress data privacy and opt-outs (inferred from the proposal's mention of data misuse). Another regulatory aspect is employment law: AI in HR decisions must avoid discrimination, in line with EEOC guidance; Comcast ensures its HR AI tools (if any) undergo bias audits, likely volunteering to follow the New York City bias-audit law even beyond NYC. Comcast is also subject to consumer protection regulations: if AI chatbots handle customers, Comcast needs to adhere to truth-in-advertising and fair-billing rules - thus it will make sure AI doesn't, say, upsell improperly or deny benefits incorrectly (and, if it does, that a human fallback exists). The entertainment content side faces IP and labor regulations - after the SAG-AFTRA deal in 2023, Comcast's studios (Universal) legally cannot use actors' likenesses via AI without negotiation. Comcast is aligned by implementing internal controls to abide by these contract and regulatory requirements (so its governance ensures any request to use AI on archived footage goes through legal review and consent). Additionally, Comcast has engaged with the EU AI Act through industry associations (Comcast has a presence in Europe via Sky); presumably it monitors that law and is prepared to categorize and control its AI systems under it (e.g., its AI for video surveillance in theme parks or venues may be considered high-risk under EU rules, so Comcast would implement the necessary risk management). Comcast's demonstration of seriousness (like referencing the AI Bill of Rights and pausing shareholder proposals with promises) suggests it aims to be on the right side of regulators. Also, the AFL-CIO withdrawal likely required Comcast to at least commit to board oversight, which regulators favor as good governance. There is no indication of pushback against regulation from Comcast - if anything, it has an interest in clear rules so it can innovate without fear. For instance, rather than quietly using facial recognition in its security operations, Comcast has reportedly placed a moratorium on such technology in its parks until regulations and ethics are settled (not publicly confirmed, but consistent with industry trends given activism).
In summary, Comcast is proactively aligning with regulatory expectations on AI - focusing on transparency, nondiscrimination, privacy, and human oversight - both to satisfy investors and to preempt legal issues; and Comcast has not had a high-profile AI ethics scandal, but it came under scrutiny indirectly through sector issues. One event-like situation was the shareholder action itself, which highlighted potential risks such as AI causing bias in employment or "deepfake" content. While not a publicly reported event of harm, it was a serious warning that if Comcast did not address these risks, real incidents could occur (e.g., algorithmic bias lawsuits or public outcry over AI-manipulated media). Comcast heeded that warning, which likely prevented issues. Another borderline case was involvement in the Hollywood strikes, where AI was a contentious point; though no specific negative event befell Comcast alone, industry-wide there was reputational risk if studios (including Comcast's Universal) were seen as pushing unethical AI usage. Comcast avoided being singled out by eventually agreeing to relatively extensive (based on public disclosures) AI protections in union contracts. Gaps: one gap had been limited public transparency - now being addressed by the promised report. Until that report is out, there is still some opacity; for example, the public does not yet know Comcast's full ethical AI policy. Another gap might be the breadth of board expertise - Comcast's board has gained tech expertise over the years, but some may argue a dedicated AI ethics advisor on the board could strengthen oversight (Comcast hasn't announced such an appointment). Also, Comcast's AI is wide-ranging (network management, advertising, content creation, customer service, HR), which means implementing one-size-fits-all governance is challenging; gaps could occur if one department lags (say, advertising tech might implement AI quickly for competitive reasons and inadvertently miss a fairness check). However, Comcast's enterprise-wide approach aims to minimize that. There might be a gap in public communication: Comcast hasn't been vocal publicly about its AI ethics compared to peers (no big press release like "Comcast publishes AI Principles" yet). This will likely be resolved once Comcast includes it in an ESG report or proxy statement. In terms of actual AI failures, none have been reported (no news of, say, Comcast's AI customer service doing something egregious). So the "events" noted here are more proactive interventions (shareholder and union actions) than reactive crises. If Comcast follows through on its commitments, these gaps will close. The main gap to watch is implementation consistency - making sure all of Comcast's AI developers adhere to the new rules. That is an ongoing effort, not a one-time fix.
History
| As-of date | Transparency | Ethical Principles | Governance Structure | Regulatory Alignment | Stakeholder Engagement | AIGES Composite |
|---|---|---|---|---|---|---|
| 2025-01-01 | 60.0 | 40.0 | 70.0 | 60.0 | 65.0 | 59.0 |
| 2025-04-01 | 70.0 | 55.0 | 75.0 | 60.0 | 70.0 | 66.0 |
| 2025-07-01 | 70.0 | 55.0 | 75.0 | 60.0 | 70.0 | 66.0 |
| 2025-10-01 | 70.0 | 55.0 | 75.0 | 60.0 | 70.0 | 66.0 |
| 2026-01-01 | 75.0 | 57.0 | 77.0 | 64.0 | 74.0 | 69.0 |