Welcome to the AI Security and Risk Association (AISRA) at UF 🐊!

The AI Security and Risk Association (AISRA) is a mission-driven student organization at the University of Florida. We are focused on identifying and developing talented students to address the critical security and alignment challenges of advanced AI. Our work is defined by its rigor, focus on real-world impact, and commitment to upholding national security.

Quick Resources:

<aside> 🗞️ Newsletter: Stay updated on AISRA's journey and its AI security research at ufaisecurity.substack.com

</aside>

<aside>

Discord: Use this link to join our interdisciplinary community.

</aside>

Why Does AI Security and Risk Matter?

<aside> 🚨

The risks described above are concerning, yet they represent only a small subset of the risk vectors associated with current state-of-the-art models, **which will only grow more capable.** If advancements in AI capabilities continue without a corresponding acceleration in research on AI alignment methods, the widespread deployment of these increasingly powerful models will likely result in a catastrophic event of some kind. The prospect of this scenario poses a significant threat to both national and global security, which is why we established AISRA.

</aside>

Our Mission:

We seek to contribute to solving this problem by pursuing two major paths: technical AI alignment research and AI governance.

Through the programs we offer and the events we host, we aim to cultivate situational awareness in the student body while funneling capable students into programs that build the knowledge and experience needed to pursue one of these two paths professionally.

Our Programs

Most students will likely become acquainted with us through the student-body-facing events that we host. For those eager to learn more, we offer our introductory fellowships and paper reading groups. Ultimately, the org connects highly motivated students who demonstrate a strong commitment with opportunities to conduct research under the guidance of our affiliated faculty as part of our AI Security Scholars Research program.

Step 1: Learn the Basics

<aside>

Technical Path

</aside>

<aside>

🤖 Intro to AI Alignment Fellowship

Over 8 weeks, our fellowship offers a deep dive into AI alignment using the introductory curriculum from BlueDot Impact. Through collaborative weekly meetings, you will grapple with critical challenges such as understanding neural networks, correcting flawed AI goals, and learning how to elicit latent knowledge from advanced systems.

</aside>

<aside>

Governance Path

</aside>

<aside>

🏛 Intro to AI Governance Fellowship

Over 8 weeks, our fellowship offers a deep dive into AI governance using the introductory curriculum from BlueDot Impact. Through collaborative weekly meetings, you will grapple with critical challenges such as understanding the promise and perils of advanced AI and evaluating current and proposed national and global AI policies.

</aside>

Step 2: Survey the Frontier