As AI continues to shape the development landscape, developers are navigating a new frontier—not one that will make their careers obsolete, but one that will require their skills and instincts more than ever.
Sure, AI is revolutionizing software development, but that revolution ultimately starts and stops with developers. That’s because these tools need to have a pilot in control. While they can improve the time to code and ship, they can’t serve as a replacement for human oversight and coding abilities.
We recently conducted research into the evolving relationship between developers and AI tools and found that AI has the potential to alleviate the cognitive burden of complex tasks for developers. Instead of being used solely as a second pair of hands, AI tools can also be used more like a second brain, helping developers be more well-rounded and efficient.
In essence, AI can reduce mental strain so that developers can focus on anything from learning a new language to creating high-quality solutions for complex problems. So, if you’re sitting here wondering if you should learn how to code or how AI fits into your current coding career, we’re here to tell you what you need to know about your work in the age of AI.
A brief history of AI-powered techniques and tools
While the media buzz around generative AI is relatively new, AI coding tools have been around, in some form or another, much longer than you might expect. To get you up to speed, here’s a brief timeline of the AI-powered tools and techniques that have paved the way for the sophisticated coding tools we have today:
1950s: Autocoder was one of the earliest attempts at automatic coding. Developed by IBM, Autocoder translated symbolic language into machine code, streamlining programming tasks for early computers.
1958: LISP, one of the oldest high-level programming languages, was created by John McCarthy. It introduced symbolic processing and recursive functions, laying the groundwork for AI programming, and its flexibility and expressive power made it a popular choice for AI research and development.
(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))
This function calculates the factorial of a non-negative integer ‘n’ in LISP. If ‘n’ is 0 or 1, the factorial is 1. Otherwise, it recursively multiplies ‘n’ by the factorial of n-1 until ‘n’ reaches 1.
1970: SHRDLU, developed by Terry Winograd at MIT, was an early natural language understanding program that could interpret and respond to commands in a restricted subset of English, and demonstrated the potential for AI to understand and generate human language.
SHRDLU, operating in a block world, aimed to understand and execute natural language instructions for manipulating virtual objects made of various shaped blocks. [Source: Cryptlabs]
1980s: Code generators, such as The Last One, emerged as tools that could automatically generate code based on user specifications or predefined templates. While not strictly AI-powered in the modern sense, they laid the foundation for later advancements in code generation and automation.
“Personal Computer” magazine cover from 1982 that explored the program, The Last One. [Source: David Tebbutts]
1990s: Neural network–based predictive models were increasingly applied to code-related tasks, such as predicting program behavior, detecting software defects, and analyzing code quality. These models leveraged the pattern recognition capabilities of neural networks to learn from code examples and make predictions.
2000s: Refactoring tools with AI capabilities began to emerge, offering automated assistance for restructuring and improving code without changing its external behavior. These tools used AI techniques to analyze code patterns, identify opportunities for refactoring, and suggest appropriate changes to developers.
These early AI-powered coding tools helped shape the evolution of software development and set the stage for today’s AI-driven coding assistance and automation tools, which continue to evolve seemingly every day.
Evolving beyond the IDE
Initially, AI tools were primarily confined to the integrated development environment (IDE), aiding developers in writing and refining code. But now, we’re starting to see AI touch every part of the software development lifecycle (SDLC), which we’ve found can increase productivity, streamline collaboration, and accelerate innovation for engineering teams.
In a 2023 survey of 500 U.S.-based developers, 70% reported experiencing significant advantages in their work, while over 80% said these tools will foster greater collaboration within their teams. Additionally, our research revealed that developers, on average, complete tasks up to 55% faster when using AI coding tools.
Here’s a quick look at where modern AI-powered coding tools are and some of the technical benefits they provide today:
Code completion and suggestions. Tools like GitHub Copilot use large language models (LLMs) to analyze code context and generate suggestions to make coding more efficient. Developers can now experience a notable boost in productivity as AI can suggest entire lines of code based on the context and patterns learned from developers’ code repositories, rather than just the code in the editor. Copilot also leverages the vast amount of open-source code available on GitHub to enhance its understanding of various programming languages, frameworks, and libraries, to provide developers with valuable code suggestions.
Generative AI in your repositories. Developers can use tools like GitHub Copilot Chat to ask questions and gain a deeper understanding of their code base in real time. With AI gathering context of legacy code and processes within your repositories, GitHub Copilot Enterprise can help maintain consistency and best practices across an organization’s codebase when suggesting solutions.
Natural language processing (NLP). AI has recently made great strides in understanding and generating code from natural language prompts. Think of tools like ChatGPT where developers can describe their intent in plain language, and the AI produces valuable outputs, such as executable code or explanations for that code functionality.
Enhanced debugging with AI. These tools can analyze code for potential errors, offering possible fixes by leveraging historical data and patterns to identify and address bugs more effectively.
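To make that last point concrete, here is a small, hypothetical Python example of the kind of defect pattern AI-assisted debugging tools are good at flagging: an off-by-one error in a loop bound. The function names and the scenario are illustrative, not taken from any particular tool.

```python
# Hypothetical example: an off-by-one bug of the sort AI-assisted
# reviewers often flag, alongside the corrected version.

def moving_average_buggy(values, window):
    """Intended to average every full window of `values`,
    but the range stops one window short."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]  # bug: misses the last window

def moving_average_fixed(values, window):
    """Corrected: range(len(values) - window + 1) covers every window."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average_buggy([1, 2, 3, 4], 2))  # [1.5, 2.5] -- last window missing
print(moving_average_fixed([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

A human reading the buggy version might skim past the loop bound; pattern-matching against thousands of similar historical fixes is exactly where these tools shine.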
To implement AI tools, developers need technical skills and soft skills
There are two subsets of skills that can help developers as they begin to incorporate AI tools into their development workflows: technical skills and soft skills. Both matter. Developers need technical depth to make AI tools work to their advantage, but they also need to collaborate well, solve problems creatively, and keep the big picture in view so that the solutions they build actually meet the needs of the people using them.
Let’s take a look at those technical skills first.
Getting technical
Prompt engineering
Prompt engineering involves crafting well-designed prompts or instructions that guide the behavior of AI models to produce desired outputs or responses. It can be pretty frustrating when AI-powered coding assistants don’t generate a valuable output, but that can often be quickly remedied by adjusting how you communicate with the AI. Here are some things to keep in mind when crafting natural language prompts:
Be clear and specific. Craft direct and contextually relevant prompts to guide AI models more effectively.
Experiment and iterate. Try out various prompt variations and iterate based on the outputs you receive.
Validate, validate, validate. Similar to how you would inspect code written by a colleague, it’s crucial to consistently evaluate, analyze, and verify code generated by AI algorithms.
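The tips above can be sketched with a small Python illustration. The two comment-style prompts and the function below are purely illustrative (not output from any specific assistant): the point is that a clear, specific prompt that pins down inputs, outputs, and edge cases is far more likely to yield a usable completion than a vague one.

```python
# Vague prompt:
#   "parse the date"
#
# Clear, specific prompt:
#   "Parse a date string in ISO 8601 format (YYYY-MM-DD) and return a
#    (year, month, day) tuple of ints; raise ValueError on bad input."

def parse_iso_date(text):
    """One plausible completion a well-prompted assistant could produce."""
    parts = text.split("-")
    if len(parts) != 3:
        raise ValueError(f"not an ISO date: {text!r}")
    year, month, day = (int(p) for p in parts)
    return (year, month, day)

print(parse_iso_date("2024-02-02"))  # (2024, 2, 2)
```

Notice how every requirement in the specific prompt (format, return type, error behavior) maps directly onto a line of the result; the vague prompt leaves all of those decisions to chance.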
Code reviews
AI is helpful, but it isn’t perfect. While LLMs are trained on large amounts of data, they don’t inherently understand programming concepts the way humans do. As a result, the code they generate may contain syntax errors, logic flaws, or other issues. That’s why developers need to rely on their coding competence and organizational knowledge to make sure that they aren’t pushing faulty code into production.
For a successful code review, you can start out by asking: does this code change accomplish what it is supposed to do? From there, you can take a look at this in-depth checklist of more things to keep in mind when reviewing AI-generated code suggestions.
Testing and security
With AI’s capabilities, developers can now generate and automate tests with ease, making their testing responsibilities less manual and more strategic. To ensure that the AI-generated tests cover critical functionality, edge cases, and potential vulnerabilities effectively, developers will need a strong foundational knowledge of programming skills, testing principles, and security best practices. This way, they’ll be able to interpret and analyze the generated tests effectively, identify potential limitations or biases in the generated tests, and augment with manual tests as necessary.
Here are a few steps you can take to assess the quality and reliability of AI-generated tests:
Verify test assertions. Check if the assertions made by the AI-generated tests are verifiable and if they align with the expected behavior of the software.
Assess test completeness. Evaluate if the AI-generated tests cover all relevant scenarios and edge cases and identify any gaps or areas where additional testing may be required to achieve full coverage.
Identify limitations and biases. Consider factors such as data bias, algorithmic biases, and limitations of the AI model used for test generation.
Evaluate results. Investigate any test failures or anomalies to determine their root causes and implications for the software.
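As a minimal sketch of that review process, consider the hypothetical example below. The function, the "generated" test, and all names are illustrative: an AI-produced suite covers the happy path, and a human reviewer augments it with the edge cases the checklist above is designed to catch.

```python
def clamp(value, low, high):
    """Restrict `value` to the inclusive range [low, high]."""
    return max(low, min(value, high))

# A test an AI assistant might generate: happy path and lower bound only.
def test_clamp_generated():
    assert clamp(5, 0, 10) == 5
    assert clamp(-3, 0, 10) == 0

# Manual augmentation after review: the generated suite never checked
# the upper bound or the degenerate low == high case.
def test_clamp_edge_cases():
    assert clamp(99, 0, 10) == 10   # upper bound
    assert clamp(7, 3, 3) == 3      # collapsed range

test_clamp_generated()
test_clamp_edge_cases()
```

The pattern scales: verify the generated assertions, look for the scenarios they skip, and add manual tests to close the gaps.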
For those beginning their coding journey, check out the GitHub Learning Pathways to gain deeper insights into testing strategies and security best practices with GitHub Actions and GitHub Advanced Security.
You can also bolster your security skills with this new, open source Secure Code Game 🎮.
And now, the soft skills
As developers leverage AI to build what’s next, having soft skills—like the ability to communicate and collaborate well with colleagues—is becoming more important than ever.
Let’s take a more in-depth look at some soft skills that developers can focus on as they continue to adopt AI tools:
Communication. Communication skills are paramount to collaborating with team members and stakeholders to define project requirements, share insights, and address challenges. They’re also important as developers navigate prompt engineering. The best AI prompts are clear, direct, and well thought out—and communicating with fellow humans in the workplace isn’t much different.
Did you know that prompt engineering best practices just might help you build your communication skills with colleagues? Check out this thought piece from Harvard Business Review for more insights.
Problem solving. Developers may encounter complex challenges or unexpected issues when working with AI tools, and the ability to think creatively and adapt to changing circumstances is crucial for finding innovative solutions.
Adaptability. The rapid advancement of AI technology requires developers to be adaptable and willing to embrace new tools, methodologies, and frameworks. Plus, cultivating soft skills that promote a growth mindset allows individuals to consistently learn and stay updated as AI tools continue to evolve.
Ethical thinking. Ethical considerations are important in AI development, particularly regarding issues such as bias, fairness, transparency, and privacy. Integrity and ethical reasoning are essential for making responsible decisions that prioritize the well-being of users and society at large.
Empathy. Developers are often creating solutions and products for end users, and to create valuable user experiences, developers need to be able to really understand the user’s needs and preferences. While AI can help developers create these solutions faster, through things like code generation or suggestions, developers still need to be able to QA the code and ensure that these solutions still prioritize the well-being of diverse user groups.
Sharpening these soft skills can ultimately augment a developer’s technical expertise, as well as enable them to work more effectively with both their colleagues and AI tools.
Take this with you
As AI continues to evolve, it’s not just changing the landscape of software development; it’s also poised to revolutionize how developers learn and write code. AI isn’t replacing developers—it’s complementing their work, all while providing them with the opportunity to focus more on coding and building their skill sets, both technical and interpersonal.
If you’re interested in improving your skills along your AI-powered coding journey, check out these repositories to start building your own AI based projects. Or you can test out GitHub Copilot, which can help you learn new programming languages, provide coding suggestions, and ask important coding questions right in your terminal.
The previous in-person Jenkins Contributor Summit took place in 2020, just prior to the lockdowns and precautions that would change the world.
Thankfully, on February 2, 2024, just prior to this year’s FOSDEM conference, we were able to gather again, in Brussels, so that we could have an in-person Jenkins Contributor Summit.
The return to in-person meant that not only could we gather in one location, but we were also able to collaborate and work together directly, something that is not normally possible due to the global spread of the Jenkins community.
To make things even better, all of the Jenkins Officers and four of the five board members were able to travel to Brussels for the summit!
Massive thanks to Betacowork for providing a space that could hold the Contributor Summit.
If you want to follow along, we are including a link to the Contributor Summit slide deck to view the presentation at any time.
The day started with Jean-Marc Meessen providing an overview of the agenda.
Mark Waite then provided a review of the current state of Jenkins as a project.
This covered everything from user and maintainer statistics to what the future of Jenkins will hopefully look like.
Following Mark, the Jenkins SIG leaders and Officers provided insights into their various areas of knowledge.
Damien Duportal, the Infrastructure Officer, presented first, reviewing how Jenkins Infrastructure has evolved over the last year and what we are looking forward to in 2024.
Next up, Tim Jacomb, the Release Officer, shared the successes and innovations that the project experienced throughout 2023.
After Tim wrapped up his section of the presentation, Kevin Martens, the Documentation Officer, shared what we hope 2024 will look like for Jenkins documentation and Jenkins.io.
As the Advocacy & Outreach SIG leader, Alyssa Tong then recapped all of the events that Jenkins held or participated in during 2023.
She also shared the exciting news that Jenkins recently won the Most Innovative DevOps Open Source Project award from DevOps Dozen.
Following Alyssa, Wadeck Follonier, the Security Officer, reviewed the successes that the Jenkins Security team had over the last year.
Wadeck also outlined tooling additions and changes to Jenkins that will help determine vulnerabilities and issues.
Tim Jacomb then took the stage once again to provide insights on the user experience of Jenkins.
He highlighted the Plugin Manager improvements, UI modernization, and contributions from Jan Faracik, including the removal of the Yahoo UI library, among other things.
After breaking for lunch, we returned to the Contributor Summit to hear Vincent Latombe share what was done in Jenkins to support High Availability/Horizontal Scalability for CloudBees.
After Vincent finished, Oleg Nenashev provided an update and shared what the roadmap looks like for Jenkinsfile Runner.
After Oleg wrapped up, Bruno Verachten provided insights and review from the Platform SIG.
Once Bruno finished his presentation, Alexander Brandes and Damien Duportal shared and discussed the idea (and potential challenges) of removing Blue Ocean from the Jenkins base distribution.
This is a topic that will continue to be discussed for the foreseeable future, until a reasonable solution and replacement can be decided upon.
After all of the presentations were finished, Basil Crow provided an overview and demo of "Searching for API usage in plugins."
The presentation itself reviewed what the API usage might include, why it is helpful to perform this search, why migrations should be managed, and why empathy is a core value when it comes to development and engineering.
The Contributor Summit then concluded with a two-hour group coding session, where attendees were encouraged to team up with other participants on any of the topics discussed earlier in the day.
This provided an opportunity for people to work directly with one another, which would otherwise be impossible due to how far the Jenkins community stretches.
Work that would typically be done asynchronously was instead immediately possible thanks to the proximity of the contributors.
With the Contributor Summit wrapped up, we shifted focus to FOSDEM and the rest of the weekend.
This year’s FOSDEM conference was as busy as ever!
The Jenkins booth saw tons of visitors over the two days, and we even sold out most of our t-shirts!
Bruno once again brought miniJen and a whole new Kubernetes (Roundernetes) set-up to help draw visitors in and have conversations around what Jenkins is capable of.
Over the course of FOSDEM, we received hundreds of visitors, as evidenced by the lack of stickers that we brought home.
There was very little downtime at the Jenkins stand, with visitors constantly coming by with questions about Jenkins present and future.
Overall, the Contributor Summit and FOSDEM were both wildly successful for the Jenkins community, proving again how important these events are.
Thanks and gratitude
We want to express our deep appreciation to Betacowork for providing a room for the Jenkins Contributor Summit.
The room was more than enough for all of the contributors to gather and share in the summit, in addition to providing great space for the group coding session.
Thanks to Jean-Marc Meessen for connecting with Betacowork to secure the room for this year’s summit.
We also want to thank FOSDEM for once again allowing Jenkins to be part of the event.
It was a wonderful experience to attend the conference and share Jenkins with the open-source community.
We would also like to thank both CloudBees and the Continuous Delivery Foundation for donating the shirts, socks, and stickers that occupied our booth for the weekend.