College University Jobs Near Me

Showing 1 - 10 of 9411 results

  • AI / Emerging Tech Security Analyst (Alignerr, Miami, FL): $40.00 – $60.00/hr
  • AI / Emerging Tech Security Analyst (Alignerr, Seattle, WA): $40.00 – $60.00/hr
  • AI / Emerging Tech Security Analyst (Alignerr, New York, NY): $40.00 – $60.00/hr
  • AI / Emerging Tech Security Analyst (Alignerr, Boston, MA): $40.00 – $60.00/hr
  • AI / Emerging Tech Security Analyst (Alignerr, Denver, CO): $40.00 – $60.00/hr
  • Event Assistant (Platinum Coastal Group, Brandon, FL): $48k – $62k/year
  • Customer Service & E-Commerce Supervisor - Full Time (Whole Foods Market, Chicago, IL): $17.50 – $30.20/hr
  • Threat Intelligence Analyst (Alignerr, New York, NY): $35.00 – $60.00/hr
  • Marketing Assistant (Platinum Coastal Group, Winter Park, FL): $48k – $60k/year
  • VP Professional Services North America (Earnix, Boston, MA): $250k – $320k/year
ORIGINAL JOB LISTING

AI / Emerging Tech Security Analyst

Alignerr
Miami, FL
Remote · Contract
$40.00 – $60.00/hr

This job is currently accepting applications. Review the details and apply today.

About this role

What if your security instincts could directly shape how the world's most powerful AI systems defend themselves against attack? We're looking for AI Security Analysts to stress-test frontier models: probing for weaknesses, evaluating adversarial scenarios, and helping ensure that cutting-edge AI remains safe, reliable, and resistant to misuse.

This is a fully remote, flexible contract role built for security professionals who are curious about how modern AI systems can be exploited, manipulated, or pushed beyond their intended boundaries. If you've ever wondered what happens when someone tries to break an LLM, this is your chance to find out, get paid for it, and make AI safer in the process.

Organization: Alignerr
Type: Hourly Contract
Location: Remote

Responsibilities

  • Analyze real-world AI and LLM security scenarios to understand how models behave under adversarial or unexpected conditions
  • Review and evaluate cases involving prompt injection, data leakage, model abuse, and system misuse
  • Complete task-based assignments independently on your own schedule
  • Classify security issues by real-world impact and likelihood, and recommend appropriate mitigations
  • Help evaluate and improve AI system behavior so it remains safe, aligned, and robust against attack
  • Apply threat modeling principles to emerging AI technologies and architectures
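
To make the classification task above concrete, here is a purely illustrative sketch (not part of the listing, and not Alignerr's actual process) of scoring a finding by impact and likelihood; the level names and thresholds are assumptions:

```python
# Illustrative only: classify a security finding into a severity bucket
# from coarse impact and likelihood ratings. Scale names, thresholds,
# and the sample finding are all assumptions for demonstration.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def severity(impact: str, likelihood: str) -> str:
    """Map an impact/likelihood pair to a coarse severity bucket."""
    score = LEVELS[impact] * LEVELS[likelihood]
    if score >= 6:
        return "critical"
    if score >= 3:
        return "moderate"
    return "low"

# Hypothetical finding: a prompt-injection case that leaks system-prompt text.
finding = {
    "category": "prompt injection",
    "impact": "high",        # sensitive data exposure
    "likelihood": "medium",  # needs crafted input, but no special access
}
print(severity(finding["impact"], finding["likelihood"]))  # critical
```

A real rubric would carry more dimensions (exploitability, blast radius, detectability), but the shape of the task is the same: rate each axis, combine, and attach a recommended mitigation to the bucket.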

Required Skills & Experience

  • Solid understanding of security threat modeling, and genuine curiosity about how it applies to AI
  • Background in cybersecurity, information security, or a closely related technical field
  • Hands-on experience with penetration testing, red teaming, or vulnerability research
  • Familiarity with large language models, AI APIs, or prompt engineering
  • Analytical and precise when evaluating complex systems, edge cases, and potential failure modes
  • Comfortable working through ambiguous, open-ended scenarios with a structured mindset
  • Self-motivated and reliable when working independently without supervision
Nice to Have

  • Background in application security, cloud security, or ML systems
  • Prior exposure to AI safety, alignment research, or responsible disclosure
  • Experience writing clear, structured security reports or risk assessments

Benefits

  • Benefits details are shared by the employer during the hiring process.

Environment

This is a fully remote, flexible hourly contract (10–40 hours per week) built for security professionals who are curious about how modern AI systems can be exploited, manipulated, or pushed beyond their intended boundaries. Work is task-based and completed independently on your own schedule.

Why Join Us

  • Work directly on frontier AI systems alongside leading research labs
  • Fully remote and flexible: work when and where it suits you
  • Freelance autonomy with the structure of meaningful, task-based work
  • Contribute to AI safety work that has a real impact on how the world's most advanced models behave
  • Potential for ongoing work and contract extension as new projects launch

