Research Scientist, Large Language Model Interpretability and Alignment

Google

Minimum qualifications:

  • PhD degree in Computer Science, a related field, or equivalent practical experience.
  • 1 year of experience in coding, including writing ML code.
  • One or more scientific publication submissions to conferences, journals, or public repositories, in venues such as NeurIPS, ICLR, ICML, EMNLP, NAACL, ACL, or similar.

 

Preferred qualifications:

  • 2 years of experience in coding with Python.
  • 1 year of experience owning and initiating research agendas.
  • Experience working with LLMs/Transformer models.
  • Experience with interpretability methods such as training data attribution and saliency methods.
  • Experience with serving Machine Learning (ML) models.

 

About the job

As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution, and infrastructure goals, while providing individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science, such as machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, improving compilers for mobile platforms, as well as core search and much more.

As a Research Scientist, you'll also actively contribute to the wider research community by sharing and publishing your findings, with ideas inspired by internal projects as well as from collaborations with research programs at partner universities and technical institutes all over the world.

People + AI Research (PAIR) is a team in Google Research focused on human-centered research and design to make AI partnerships productive and fair. Our team does fundamental research and creates frameworks for design to drive a human-centered approach to AI. PAIR engages with the academic community, Google's products, and the general public on topics such as interactive ML and developing insights about emergent capabilities and underlying mechanisms of foundation models.

In this role, you will research ML algorithms and interpretability to help understand and control ML models, leverage small datasets, and enable applications of large language-driven models.

The US base salary range for this full-time position is $136,000-$200,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

 

Responsibilities

  • Research alignable neural algorithms, advance research on mechanistic interpretability, and help invent new ML architectures.
  • Explore sample-efficient control methods and build these capabilities into new algorithms.
  • Write ML code (e.g., JAX, PyTorch, TFJS, Haiku).
  • Coordinate research efforts within and outside PAIR, manage OKRs and priorities, and drive thoughtful research directions toward alignable neural algorithms.
  • Write research papers for publication and present internally and externally.

 

Location
New York, NY, USA
Salary
$136,000-$200,000
Date posted
March 20, 2024