Summary
New Features and Enhancements: 16
Bug Fixes: 25
Feature Flags: 4
Important Upgrade Information
In version 1.4.0, we deprecated support for two Codefresh managed charts:
- The 'Codefresh-managed Postgresql' chart has been replaced with the public Bitnami chart, 'bitnami/postgresql'.
- The 'Codefresh-managed MongoDB' chart has been replaced with the public Bitnami chart, 'bitnami/mongodb'.
If you are upgrading to the current version and are using the Codefresh-managed Postgresql and MongoDB charts, you must update the configuration for both charts, both before and after the upgrade. Follow the instructions here: Upgrading to 1.4.0 and higher.
New Features and Enhancements
Minimum disk space definition in pipelines
To improve performance, Codefresh reuses disks with cached data across pipeline builds. This means that the disk is not necessarily empty when a build starts, and in rare cases can cause a build failure due to a lack of disk space.
The Set minimum disk space option allows you to proactively plan for this scenario. You can define the minimum disk space for the build, and Codefresh ensures that when the build starts, it gets either a cached disk with sufficient space or a new disk.
You can then track the actual disk usage through the Disk Usage chart in Build > Metrics.
The Y-axis shows the maximum disk space, with the red line set at 90% of the disk space. The X-axis charts the duration of the build run. To see the precise disk usage at any point in time, mouse over the dots.
For details, see Set minimum disk space for builds and Viewing pipeline metrics.
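As an illustration only, the option could also appear in the pipeline's YAML spec. The field name and value format below are assumptions based on the UI option described above; verify the exact syntax in Set minimum disk space for builds.

```yaml
# Hypothetical fragment of a pipeline spec (codefresh get pipeline -o yaml);
# the requiredAvailableStorage field name is an assumption - confirm it in the docs.
spec:
  requiredAvailableStorage: 10GB   # minimum free disk space requested for the build volume
```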
View variables used in pipeline builds
The variables used by a specific build of the pipeline vary according to the events that triggered the pipeline. They are injected into the pipelines from different sources and at different levels.
You can now view all the build variables in the same location. Just click the build’s context menu next to Restart, and select Variables. The variables are grouped by levels: Project, Shared Configuration, Trigger and Pipeline. You can also identify if there were overrides and at which level.
For details, see Viewing variables in pipeline builds.
Multiple Helm contexts for pipelines
This version introduces support for multiple Helm registry contexts in the same pipeline.
This means that if the Helm chart has dependencies in any of the imported Helm registry contexts, these are automatically authenticated and added.
For the Helm install and push actions, you can select the primary Helm registry context for the command.
For details, see Import Helm configurations into your pipeline definition and Action modes.
Shallow clone for Git with depth
Our Git-clone step now supports shallow clone. Instead of cloning the entire repository, shallow clone restricts the scope of the clone to the number of commits that you specify.
Add the depth attribute and specify the number of commits to include in the clone.
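For example, a git-clone step that fetches only the most recent commit could look like the snippet below; the repository and Git integration names are placeholders.

```yaml
# Pipeline steps fragment: shallow clone restricted to the last commit
main_clone:
  title: Cloning repository
  type: git-clone
  repo: my-org/my-repo          # placeholder repository
  revision: '${{CF_BRANCH}}'    # branch or revision to clone
  git: github                   # placeholder Git integration (context) name
  depth: 1                      # shallow clone: include only the most recent commit
```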
For details, see Fields for Git-clone.
Increase in concurrency limits for pipelines
We have increased the concurrency limits for pipelines from 14 to 30. Trigger and branch concurrency limits are now set at 31.
For details, see Pipeline Settings - Policies.
Skip triggering pipeline on commit
You can override the default behavior of triggering the pipeline on every commit by adding one of these strings anywhere in the commit message:
[skip ci], [ci skip], [no ci], [skip codefresh], [codefresh skip]
You must include the opening and closing square brackets when adding the string.
New trigger option for PR comments
A new option for triggers from PR comments, the Pull request comment added option, triggers an event when PR comments are made by any user, regardless of their permissions.
Because it is not restricted to owners and collaborators, this option is useful in GitHub to enable triggers for PR comments made by users in GitHub teams.
To restrict triggers to PR comments made by repository owners or collaborators, use the Pull request comment added (restricted) option.
For details, see Pull Requests from comments.
Ability to restrict cluster access for pipelines
We have a new option that allows you to define specific Kubernetes clusters for a pipeline.
By default, all pipelines in an account can access all Kubernetes clusters integrated with Codefresh. Now you can selectively inject clusters into individual pipelines in the account.
Selective cluster-injection increases security by restricting access to users from different teams. For an account with a large number of clusters, cluster-injection also shortens the initialization phase of the build, reducing the overall build duration.
When cluster-injection is enabled for the account, you can explicitly select the clusters that the pipeline can access. The initialization step in the pipeline displays the clusters selected for it.
For details, see Enabling cluster contexts for pipelines and Pipeline Policies.
Volume reuse across pipelines by project
The reuseVolumeSelector in the runtime environment specifications has a new option. The option both reduces the build duration for pipelines, and allows greater flexibility to optimize caching across multiple pipelines in the same project.
In addition to reusing PVs (Persistent Volumes) by either all pipelines or a single pipeline, you can now reuse PVs across multiple pipelines in the same project.
Configuring the project_id shares PVs with all the pipelines in the account that are assigned to the same project.
For details, see Volume reuse policy.
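As a sketch, the project-level selector in the runtime environment spec could look like the fragment below. The label values are assumptions based on the existing account-level and pipeline-level selectors, so confirm them against the Volume reuse policy documentation.

```yaml
# Runtime environment spec fragment; selector values are assumptions
dockerDaemonScheduler:
  pvcs:
    dind:
      # Reuse PVs across all pipelines assigned to the same project
      reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName,project_id'
```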
Download build logs as text
Up to now, you could download log files for builds or build steps in HTML format. We have enhanced log download options to also include text format.
The Output tab for the selected build displays the Download as text option in addition to the Download as HTML option.
For details, see Viewing/downloading logs for builds and build steps.
Case-insensitive search for pipelines
Search queries in the Pipeline List and Pipeline views are now case-insensitive.
Account and build contexts
Opening a second tab for Codefresh with a different account displays a message prompting you to either switch to the new account or return to the previous one.
We have implemented a similar mechanism when running a build from a different account. A message prompts you to switch accounts to view the build.
Trigger flag for CLI branch and SHA flags
When using `codefresh run` with the `--branch` or `--sha` flags, the `--trigger` flag with the trigger ID as the value is now mandatory, for example: `codefresh run <pipeline-name> --branch <branch> --trigger <trigger-id>`.
You can get the trigger ID from the Codefresh UI by selecting the pipeline, then selecting the Triggers tab. Click Edit and get the Name.
Streamlined Feature Flag management
We completed a thorough revamp of the Feature Flag management mechanism for a simpler and enhanced user experience.
At the account level, Codefresh admins can:
- Filter feature flags by state (Enabled, Disabled or Overridden)
- Revert feature flags to their default settings
Also at the account level, system features are now in a separate group and disabled by default. When you enable a system feature, a warning advises you to consult Customer support.
On the management side, Codefresh admins can:
- Search for Feature Flags
- Filter them by state
- Enable/disable features for a single account or for multiple accounts
Bug Fixes
- Rolling Back status not updated in UI after successful rollback.
- Pipeline search is broken for users with ABAC.
- Email does not match error during sign-in to Azure with Corporate SSO when already authenticated.
- Failed test-report step deletes artifacts instead of retrying.
- `get team users` command does not return User IDs.
- Repo (Origin Repo and Branch) links from the Builds or Build Details pages to GitHub Enterprise go instead to GitHub.
- LDAP login failure after upgrade to latest release.
- Runner installation fails with CrashLoopBackOff.
- Runner installation fails with “Runtime Error: index out of range”.
- Runtime monitor fails to start on upgrading EKS (Elastic Kubernetes Service) to 1.21.
- Inconsistent results for Test Connection in ACR integration.
- Empty Select branch dropdown for Bitbucket Server repo.
- Build issues on Windows nodes.
- `shared_host_network: true` results in unreachable Service Containers.
- Pipeline fails with error: Failed to run composition: services; caused by Error: Could not get status for container <container_name>.
- Link to repo branch that includes “/” for Bitbucket is broken.
- Trigger creation fails with error: "Trigger description is not allowed to be empty".
- API call to retrieve list of builds fails with 500 error when there are a large number of builds.
- Git-push step to Bitbucket Server returns errors.
- Digest field in Summary tab for image (Images dashboard) displays the ID of the image instead of the digest value.
- Get-disk-state errors trigger Datadog monitoring.
- Cf-api fails to delete PVCs (Persistent Volume Claims) for Kubernetes versions >=1.23.
- GET /annotations causes cfapi-endpoints to restart.
- Running a build with updated variables shows the default values in the Variables view, even though the build runs with the correct values.
- ABAC not working for Git contexts because of undefined authEntity.getActiveAccountId().
Feature Flags
| Name | Description | Default value |
|------|-------------|---------------|
| injectClusterListFromPipelineSettings | Enables selecting one or more clusters for individual pipelines as part of Pipeline > Policies. The selected clusters are loaded during initialization. | FALSE |
| abacProject | Enables defining ABAC permissions for projects. | FALSE |
| cleanGitLockFiles | Deletes all lock files from the .git directory at the beginning of a git-clone step. Also adds a fallback to fully clone the repository if the existing repo cannot be used. | FALSE |
| multipleHelmRegistryContexts | Supports adding multiple Helm registry contexts to the helm step. Saves Helm HTTP basic integration login and password variables with the CR_CTX_{CONTEXT_NAME} prefix. | FALSE |