About The Company
Ford is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet. We are here to provide the means for every person to move and pursue their dreams, serving as a bridge between personal freedom and the future of mobility. In that pursuit, our 186,000 employees around the world help to set the pace of innovation every day.
Job Role: AI/ML Expert
The AI/ML Expert is responsible for designing and building an enterprise-wide AI security framework to protect home-grown and public AI and ML/DL systems against cyber threats, adversarial attacks, and data breaches.
Job Description:
This specialist combines expertise in cybersecurity and AI/ML to design, implement, and maintain security frameworks, ensuring the integrity, confidentiality, and compliance of AI-driven solutions throughout their lifecycle. The role also involves collaborating with cross-functional stakeholders and AI engineers to build and deploy an enterprise-wide AI security framework.
Responsibilities:
- Design and maintain structured guidelines and controls to secure AI systems, covering data protection, model security, and compliance requirements.
- Evaluate and utilize established frameworks such as Google’s Secure AI Framework (SAIF), NIST AI Risk Management Framework, or the Framework for AI Cybersecurity Practices (FAICP) as references or baselines.
- Identify, assess, and mitigate security risks specific to AI, including adversarial attacks, data poisoning, model inversion, and unauthorized access.
- Conduct regular vulnerability assessments and penetration testing on AI models and data pipelines.
- Ensure data used in AI systems is encrypted, anonymized, and securely stored.
- Implement robust access controls (e.g., RBAC, ABAC, Zero Trust) for sensitive AI data and models.
- Protect AI models from tampering, theft, or adversarial manipulation during training and deployment.
- Monitor and log AI system activity for anomalies or security incidents.
- Develop and enforce policies to ensure AI systems adhere to industry regulations, ethical standards, and organizational governance requirements.
- Promote transparency, explainability, and fairness in AI models.
- Establish real-time monitoring and advanced threat detection for AI systems.
- Develop and maintain an AI incident response plan for prompt mitigation and recovery.
- Educate teams on AI security best practices and foster a security-aware culture.
- Collaborate with IT, data science, compliance, and business units to align AI security with organizational goals.
Qualifications:
Technical Skills:
- Strong understanding of AI/ML concepts, architectures, and security challenges.
- Strong programming skills in Python, R, or similar languages.
- Strong experience in Google Cloud Platform (GCP) or equivalent.
- Solid understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with cloud AI/ML services and deployment pipelines is a plus.
- Experience with security frameworks (e.g., SAIF, NIST, FAICP) and regulatory compliance.
- Proficiency in data protection techniques, encryption, and secure access management.
- Familiarity with adversarial machine learning, model hardening, and input sanitization.
- Knowledge of incident response, monitoring tools, and threat intelligence platforms.
- Excellent communication and documentation skills for policy development and stakeholder engagement.
Experience:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years in AI/ML roles, including hands-on model development and deployment.
- Track record of delivering AI solutions that drive business value.