Jon Peck

Seattle, Washington, United States
2K followers · 500+ connections

About

Highly experienced tech professional with a strong background in software development…


Experience

  • GitHub

    Seattle, Washington, United States



Publications

  • Getting Started with Github for Startups

    GitHub

    GitHub for Startups provides access to GitHub Enterprise, a powerful platform you can use to manage your DevOps lifecycle. But what does that mean? What can you actually do on GitHub? How do you get started?
    And how can you get the most out of the tools available to your organization? Join us for a live session as we walk through the first steps with GitHub, an overview of how GitHub uses GitHub, and tips and best practices to have the best experience on GitHub.

    Other authors
  • A checklist and guide to get your repository collaboration-ready

    GitHub

    What's the key to a thriving project? Other people! Learn how to invite contributions, and make your repo discoverable, understandable, secure, and friction-free with our collaboration-ready checklist.

  • Securing and Accelerating the Software Factory: Have Your Cake and Eat It Too

    DeveloperWeek Management 2023

    Half of companies choose velocity over security, but these don’t need to be opposing options.

    By combining a developer-first approach to eliminating vulnerabilities with a collaborative AI-powered DevOps toolchain, we can:
    - double development velocity
    - quarter remediation times
    - increase developer happiness by 75%

  • Increasing Developer Velocity with GitHub

    GitHub

    Increase developer velocity, redefine collaboration, and secure your code by automating entire workflows with GitHub Enterprise. Learn from Glenn Wester, Principal Solutions Engineer at GitHub, and Sarah Khalife, Principal Solutions Engineer at GitHub, about how to reduce context switching and lower costs in your organization. You’ll leave this session with an understanding of how leveraging innersourcing, AI, and GitHub’s developer-first tooling across your tech stack can give teams better workflows and streamlined processes.

    Other authors
  • Managing your Tech Stack Complexity with GitHub

    GitHub

    Streamlining your tech stack’s complexity will help you finish projects faster and accomplish your business goals.

    Other authors
  • Scaling your Startup with Lean and Modern DevOps Strategies

    Startup Grind

    GitHub is world-renowned for its ability to promote meaningful collaborative work, and to accelerate development with a minimum of risk. Learn some of our strategies for building a culture of innovation, while minimizing product risk and scaling rapidly, with lessons learned from our own history and the millions of projects & companies we support worldwide.

  • Unlocking the key to organizational efficiency with InnerSource

    GitHub InFocus

    Application delivery teams are under pressure to deliver value to customers as quickly as possible. But if you find that your teams are often slowed down by a variety of organizational and technical barriers, this session is for you. Join GitHub's Field CTO, Philip Holleran, and Senior Technical Advocate, Jon Peck, for an interactive discussion that will dive into three common barriers to shipping software fast, their potential negative impacts, and how application teams can use GitHub Enterprise to overcome them.

    Other authors
  • Propelling your DevOps to New Heights

    GitHub InFocus

    DevOps makes building and shipping software faster, friendlier, and more collaborative—and automates almost all of the process. Successful DevOps implementation depends on many factors, including collaboration, tooling, transparency, and measurement. Join Senior Technical Advocate at GitHub, Jon Peck, and Principal Solutions Engineer at GitHub, Glenn Wester, in this interactive session where we'll discuss best practices, efficient techniques, and challenges you might face when rolling out DevOps in the Enterprise.

    Other authors
  • Training and Deploying an ML Model as a Microservice

    Manning

    In this liveProject, you’ll fill the shoes of a developer for an ecommerce company. Customers provide reviews of your company’s products, which are used to give a product rating. Until now, assigning a rating has been manual: contractors read each review, decide whether it’s positive or negative, and assign a score. Your boss has decided that this is too expensive and time consuming. Your mission is to automate this process, dramatically increasing the speed of rating calculations, and decreasing the cost to your company. To complete this project you will have to train a machine learning model to recognize and rank positive and negative reviews, expose this model to an API so your website and partner sites can benefit from automatic ratings, and build a small webpage using FaaS, containers, and microservices that can run your model for demonstration.

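    The core of the liveProject described above—train a classifier to label reviews as positive or negative, then call it from code—can be sketched with a toy bag-of-words Naive Bayes model. This is an illustrative stdlib-only stand-in, not the project's actual code, and the tiny training set is invented for demonstration.

    ```python
    import math
    from collections import Counter

    def tokenize(text):
        """Lowercase whitespace tokenization; real projects would do more."""
        return text.lower().split()

    def train(samples):
        """samples: list of (text, label) pairs with labels 'pos'/'neg'."""
        word_counts = {"pos": Counter(), "neg": Counter()}
        label_counts = Counter()
        vocab = set()
        for text, label in samples:
            label_counts[label] += 1
            for tok in tokenize(text):
                word_counts[label][tok] += 1
                vocab.add(tok)
        return {"words": word_counts, "labels": label_counts, "vocab": vocab}

    def predict(model, text):
        """Pick the label with the highest log-probability (add-one smoothing)."""
        total = sum(model["labels"].values())
        v = len(model["vocab"])
        scores = {}
        for label in ("pos", "neg"):
            counts = model["words"][label]
            n = sum(counts.values())
            score = math.log(model["labels"][label] / total)
            for tok in tokenize(text):
                score += math.log((counts[tok] + 1) / (n + v))
            scores[label] = score
        return max(scores, key=scores.get)

    # Illustrative training data, not from the liveProject.
    reviews = [
        ("great product works perfectly", "pos"),
        ("love it excellent quality", "pos"),
        ("terrible broke after one day", "neg"),
        ("awful waste of money", "neg"),
    ]
    model = train(reviews)
    print(predict(model, "excellent quality works great"))  # → pos
    ```

    Wrapping `predict` behind an HTTP endpoint or a FaaS handler, as the project description suggests, turns this into the automated rating service the contractors used to provide.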
  • OS for AI: Serverless, Productionized Machine Learning

    DeveloperWeek Austin

    Machine Learning has been advancing rapidly, but only a few contributors are focusing on the infrastructure and scaling challenges that come with it. When you have thousands of model versions, each written in any mix of frameworks (Python/R/Java/Ruby, PyTorch/SciKit/Caffe/TensorFlow, etc.), how do you efficiently deploy them as elastic, scalable, secure APIs with 10 ms of latency and GPU access?

    We’ve built, deployed, and scaled thousands of algorithms and machine learning models, using every kind of framework. We’ve seen many of the challenges faced in this area, and in this talk I’ll share some insights into the problems you’re likely to face, and how to approach solving them.

    In brief, we’ll examine the need for, and implementations of, a complete “Operating System for AI”: a common interface for different algorithms to be used and combined, and a general architecture for serverless machine learning which is discoverable, versioned, scalable and sharable.

  • Serverless Functions and Machine Learning: Putting the AI in APIs

    API World + AI Dev World

    Machine Learning has become an integral part of all major apps. From face recognition to product recommender engines, emotion detection to automated analytics, every product you touch contains, or can benefit from, AI -- so why is it still so difficult to identify, tune, and integrate Machine Learning? We'll investigate a number of approaches to this problem, from off-the-shelf APIs to options for training and hosting your own ML models. You'll walk away ready to hook thousands of different ready-to-run models into your app, or to productionize your own models in an on-demand, autoscaled, language-agnostic environment.

  • The OS for AI: How serverless computing enables the next generation of machine learning

    O'Reilly (OSCON)

    Machine learning has been advancing rapidly, but only a few contributors are focusing on the infrastructure and scaling challenges that come with it. When you have thousands of model versions, each written in any mix of frameworks (Python, R, Java, and Ruby, PyTorch, SciKit, Caffe, and TensorFlow, etc.), it’s difficult to know how to efficiently deploy them as elastic, scalable, secure APIs with 10 ms of latency and GPU access.

    Algorithmia has seen many of the challenges faced in this area. Jonathan Peck explores how the company built, deployed, and scaled thousands of algorithms and machine learning models using every kind of framework. You’ll learn some insights into the problems you’re likely to face and how to approach solving them. Jonathan examines the need for, and implementations of, a complete operating system for AI: a common interface for different algorithms to be used and combined, and a general architecture for serverless machine learning which is discoverable, versioned, scalable, and sharable.

  • Productionizing Your Machine Learning Models with Serverless Microservices

    DeveloperWeek

    You've developed and trained your ML model, and it performs beautifully in your development environment -- but what happens when you move that into production, and it suddenly has to scale massively varying elastic workloads, compete with other models for memory and processing resources, or mesh with models deployed in other languages and frameworks? It isn't enough to simply fire up a machine instance, write a Flask wrapper, and call it a day: properly productionizing a model requires a deep understanding of container management, load balancing, CI/CD, dynamic resource allocation, and more. In this talk, we'll look at what your team does and does not need to build in order to move from weeks of deployment time to mere minutes, while preserving elasticity, low-latency, and flexibility.

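    For illustration, the "fire up a machine instance, write a Flask wrapper, and call it a day" anti-pattern the talk critiques looks roughly like the sketch below—here as a bare stdlib WSGI callable rather than Flask, with `score` as a hypothetical stand-in for a real model's predict call. It works on a laptop but has none of the load balancing, resource management, or elasticity the talk argues production demands.

    ```python
    import io
    import json

    def score(text):
        """Hypothetical stand-in for model.predict(); counts '!' as enthusiasm."""
        return float(text.count("!"))

    def app(environ, start_response):
        """Naive single-process wrapper: read the body, run the model, return JSON."""
        size = int(environ.get("CONTENT_LENGTH") or 0)
        text = environ["wsgi.input"].read(size).decode("utf-8")
        body = json.dumps({"score": score(text)}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]

    # Exercising the callable directly, without starting a server:
    env = {"CONTENT_LENGTH": "6", "wsgi.input": io.BytesIO(b"wow!!!")}
    print(b"".join(app(env, lambda status, headers: None)))  # → b'{"score": 3.0}'
    ```

    Serving this with `wsgiref.simple_server` (or Flask) is the "mere minutes" starting point; the talk's subject is everything that must be layered on top before it is production-grade.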
  • Serverless Functions and Machine Learning: Putting the AI in APIs

    NordicAPIs

    Machine Learning has become an integral part of all major apps, from face recognition to product recommender engines, emotion detection to automated analytics. Every product you touch contains, or can benefit from, AI — so why is it still so difficult to identify, tune, and integrate Machine Learning?

    We’ll investigate a number of approaches to this problem, from off-the-shelf APIs to options for training and hosting your own ML models. You’ll walk away ready to hook thousands of different ready-to-run models into your app, or to productionize your own models in an on-demand, autoscaled, language-agnostic environment.

  • Deploying your AI/ML investments

    ODSC East

    Over the next 18 months, companies will be completing the R&D phase of their AI/ML investments and will be deploying their models and algorithms to production. The proper execution of deploying your AI/ML models will separate the organizations who see an ROI on AI from those who don't. This talk will introduce the best practices of the tech companies already deploying, the tech stack that is needed, and the organization rhythms that are needed to be successful. This talk is ideal for engineers and leadership to attend together.

  • Deploying your Machine Learning models in the real world

    AI Camp / AI NEXTCon

    You've gathered and cleaned your data, tuned your hyperparameters, trained a model that works great... on your own laptop. Where do you go from here? We'll look at a few ways of productionizing your model, from Flask on VMs to turnkey serverless hosting.

    You'll leave with the ability to confidently and easily make your models available to anyone (or for your own private use) as an autoscaled, on-demand API!

  • Making State-of-the-Art Algorithms Discoverable and Accessible to Everyone

    Heavybit

    You've trained machine learning models on your data, but how do you put them into production? When you have thousands of model versions, each written in any mix of frameworks (R/Java/Ruby/SciKit/Caffe/TensorFlow on GPUs, etc.), how do you efficiently deploy them as elastic, scalable, secure APIs with 10 ms of latency?

    ML has been advancing rapidly, but only a few contributors are focusing on the infrastructure and scaling challenges that come with it. We've built, deployed, and scaled thousands of algorithms and machine learning models, using every kind of framework. We've seen many of the challenges faced in this area, and in this talk I'll share some insights into the problems you’re likely to face, and how to approach solving them.

    In brief, we’ll examine the need for, and implementations of, a complete "Operating System for AI": a common interface for different algorithms to be used and combined, and a general architecture for serverless machine learning which is discoverable, versioned, scalable and sharable.

  • Algorithmia Learning Hub

    Algorithmia

    In this series of courses, you'll learn how to:

    - Make use of a gigantic catalog of Machine Learning functions
    - Manage your data and account
    - Deploy your own ML models and Serverless Functions on Algorithmia

  • Intro to Serverless Computing

    INE

    This course will introduce students to the fundamentals of serverless computing: what a serverless stack looks like, where and how we can use it, what different forms it can take, and how it will reduce our DevOps overhead. We'll then progress to actual implementations on specific platforms, including AWS Lambda, Azure Functions, Google Cloud Functions, and Algorithmia. By the end of the course, students will be able to integrate serverless computing into their own software solutions, in different environments and platforms.

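    The platforms the course covers all share the same core shape: a stateless function invoked with an event payload. A minimal sketch is below—the signature mirrors AWS Lambda's Python runtime, but the event shape here is a simplified illustration, not a platform-exact payload.

    ```python
    import json

    def handler(event, context=None):
        """Stateless serverless handler: take an event dict, return a response.

        The (event, context) signature follows AWS Lambda's Python runtime;
        the 'name' field is an illustrative input, not a standard one.
        """
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    print(handler({"name": "Jon"}))
    # → {'statusCode': 200, 'body': '{"message": "Hello, Jon!"}'}
    ```

    Because the function holds no state between invocations, the platform can spin up as many copies as traffic demands and tear them all down to zero—the property that eliminates most of the operational overhead the course discusses.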
  • Sentiment analysis of tweets by & about political candidates

    Open Seattle

    Politicians tweet to broadcast their ideas, criticize their opponents, and build support. They have near-complete control over what they choose to broadcast (unlike TV & journalism where a third-party is involved).

    Citizens use Twitter as an open forum, often mentioning politicians by handle (username), and politicians have almost zero direct control over citizens’ tweets.

    What do politicians choose to talk about? How do people talk about them?

    In this talk, we explore a newly-created tool for analyzing tweets by and about political candidates, using Google App Engine and Algorithmia.


Recommendations received

8 people have recommended Jon

