The deadline for applications to this role is 23:00 BST on Sunday 29 September. As a Machine Learning Research Scientist working on safety cases, you will conduct foundational research to help take this ambitious new pillar of AISI's work forward.
By building our understanding of how AI safety cases could be developed, you will help to expand AISI's programme of technical work beyond the existing workstreams focused on evaluating model capabilities and safeguards.
Safety cases are structured arguments that a system is unlikely to cause significant harm if deployed in a particular setting; they are already used as standard in other industries.
As the AI frontier develops, we expect safety cases could become an important tool for mitigating AI safety risks, whereby AI companies set out detailed arguments for how they have ensured their models are safe.
We believe it is possible to significantly develop our understanding of what a good safety case would look like now, even though the field is far from knowing how to write a detailed safety case.
In this role, you would help push this understanding forward, both by direct technical research and via technical collaborations with external researchers and organizations.
Key areas include open problems that must be solved to increase our confidence in specific safety agendas, and agenda-specific evaluations such as control and alignment evaluations.
The role offers a unique opportunity to work closely alongside the world's best technical talent, including Research Director Geoffrey Irving who leads the safety case workstream, as well as talented Policy / Strategy leads and other Research Engineers and Research Scientists.
You will also collaborate with external topic-level experts, partner organizations and policymakers to coordinate and build on external research.
There will be significant scope to contribute to the overall vision and strategy of the safety case team as an early hire.
We view work on safety cases at AISI as a critical component of the overall safety story, alongside our existing workstreams focused on evaluations of dangerous capabilities and safeguard effectiveness.
RESPONSIBILITIES

This role offers the opportunity to progress deep technical work at the frontier of AI safety and governance.
Your work would likely include:
- Detailed research on safety cases. We think it's important to get into the details, so this might mean trying to write a comprehensive safety case based on evals for a certain hazard, or developing novel methods for AI control or formal verification.
- High-level research into safety case material (e.g. what methods might be used, and what properties one would need for these arguments to be correct).
- Input into our strategy, which focuses on: how to get more safety case work to occur, what form safety cases should take, and how to improve the chance that the results improve safety.
- Collaboration with external partners (e.g. labs, academics) on joint research into safety cases.
- Research organizational work (assembling teams, organizing workshops) to create an environment where safety case work occurs.

Person Specification

We are interested in hiring individuals at a range of seniority and experience within this team, including in Senior ML Research Scientist positions.
Calibration on final title, seniority and pay will take place as part of the recruitment process.
We encourage all candidates who would be interested in joining to apply.
You may be a good fit if you have some of the following skills, experience and attitudes:
- Relevant machine learning research experience in industry, relevant open-source collectives, or academia, in a field related to machine learning, AI, AI security, or computer security.
- Broad knowledge of technical safety methods (T-shaped: some deep knowledge, lots of shallow knowledge).
- Strong writing ability.
- Motivation to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas.
- Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
- Bringing your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and an ability to find new ways of getting things done within government.
- A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
- Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.
- Direct research experience (e.g. a PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
- Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).

Salary & Benefits

We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range.
We will discuss and calibrate with you as part of the process.
The full range of salaries available is as follows:
L3: £65,000 - £75,000
L4: £85,000 - £95,000
L5: £105,000 - £115,000
L6: £125,000 - £135,000
L7: £145,000

The Department for Science, Innovation and Technology offers a competitive mix of benefits including:
- A culture of flexible working, such as job sharing, homeworking and compressed hours.
- Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
- A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
- An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
- Access to a range of retail, travel and lifestyle employee discounts.

The Department operates a discretionary hybrid working policy, which provides for a combination of working from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.

Selection Process

In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your workstream lead.
The process will culminate in a conversation with members of the senior team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
- Initial interview
- Second interview
- Technical take-home test
- Third interview and review of take-home test
- Final interview with members of the senior team

Required Experience

We select based on skills and experience in the following areas:
- Research science
- Frontier model architecture knowledge
- Frontier model training knowledge
- AI safety research knowledge
- Written communication
- Verbal communication
- Safety cases or safety systems knowledge
- Research problem selection

Desired Experience

We may additionally factor in experience with any of the areas that our workstreams specialise in:
- Autonomous systems
- Cyber security
- Chemistry or Biology
- Safeguards
- Safety Cases
- Societal Impacts