Tuesday, 11/23/2021

Helm Install Docker Registry

Before you can push or install images, Helm must authenticate to Artifact Registry. If you configured Docker with a credential helper to authenticate with Artifact Registry, you can configure Helm to use your existing Docker registry settings. Otherwise, for this quickstart you can authenticate to the repository with an access token. The Docker Registry Helm chart, forked from the original charts repo, contains a Kubernetes chart to deploy a private Docker Registry; Helm must be installed to use the charts.

If you run the registry as a container, consider adding the flag -p 443:5000 to the docker run command, or use a similar setting in a cloud configuration. You should also set the hosts option to the list of hostnames that are valid for this registry, to avoid requesting certificates for random hostnames when malicious clients connect. Step 2: Pull the Docker image and push it to your private Harbor registry. Pull the Docker image of the chart you want to add to your private repository, then push it to Harbor to make it available in your project. Start by executing the following command to obtain the latest Bitnami Ghost image.
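A minimal sketch of the pull, tag, and push sequence; the Harbor hostname harbor.example.com and the project name library are hypothetical and should be replaced with your own:

    docker pull bitnami/ghost:latest
    # Retag the image for your Harbor project (hostname and project are illustrative)
    docker tag bitnami/ghost:latest harbor.example.com/library/ghost:latest
    # Log in to Harbor and push the image
    docker login harbor.example.com
    docker push harbor.example.com/library/ghost:latest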

Alfresco Content Services (ACS) is an Enterprise Content Management (ECM) system that’s used for document and case management, project collaboration, web content publishing, and compliant records management. The flexible compute, storage, and database services that Kubernetes offers make it an ideal platform for Content Services. This Helm chart presents an enterprise-grade Content Services configuration that you can adapt to virtually any scenario with the ability to scale up, down or out, depending on your use case.

The Helm chart in this repository supports deploying the Enterprise or Community Edition of Content Services.

The Enterprise configuration deploys the following system:

Considerations

Alfresco provides tested Helm charts as a “deployment template” for customers who want to take advantage of the container orchestration benefits of Kubernetes. These Helm charts are undergoing continual development and improvement, and shouldn’t be used “as is” for your production environments, but should help you save time and effort deploying Content Services for your organization.

The Helm charts in this repository provide a PostgreSQL database in a Docker container and don't configure any logging. This design was chosen so that you can install them in a Kubernetes cluster without changes, and they're flexible enough to adapt to your actual environment.

You should use these charts in your environment only as a starting point, and modify them so that Content Services integrates into your infrastructure. You typically want to remove the PostgreSQL container, and connect the cs-repository directly to your database (this might require custom images to get the required JDBC driver in the container).

Another typical change is the integration of your company-wide monitoring and logging tools.

Deployment options

For the best results, we recommend deploying Content Services to AWS EKS.

There are also several Helm examples that show you how to deploy with various configurations:

Customize

To customize the Helm deployment, for example applying AMPs, we recommend following the best practice of creating your own custom Docker image(s). The following customization guidelines walk you through this process.

Any customizations (including major configuration changes) should be done inside the Docker image, resulting in the creation of a new image with a new tag. This approach allows changes to be tracked in the source code (Dockerfile) and rolling updates to the deployment in the Kubernetes cluster.

The Helm chart configuration customization should only include environment-specific changes (for example, DB server connection properties) or altered Docker image names and tags. Configuration changes applied via --set are only reflected in the configuration stored in the Kubernetes cluster; a better approach is to keep them in source control, i.e. maintain your own values files.

Creating custom Docker images

The Docker Compose customization guidelines provide a detailed example of how to apply an AMP in a custom image. There's also a more advanced example of building a custom image with configuration.

Using custom Docker images

Once you’ve created your custom image, you can either change the default values in the appropriate values file in the helm/alfresco-content-services folder, or you can override the values via the --set command-line option during the install:
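As a rough sketch, an override for a custom repository image might look like the following; the release name, chart reference, and value keys (repository.image.repository and repository.image.tag) are illustrative and should be checked against the values file you are using:

    helm install acs alfresco/alfresco-content-services \
      --set repository.image.repository=myregistry.example.com/my-custom-repo \
      --set repository.image.tag=7.0.0-custom \
      --namespace alfresco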

Helm deployment with AWS EKS

This section describes how to deploy Content Services (ACS) Enterprise or Community using Helm onto EKS.

Amazon EKS (Elastic Kubernetes Service) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.

The Enterprise configuration will deploy the following system:

Prerequisites

  • You’ve read the prerequisites section of the main acs-deployment project README
  • You’ve read the main Helm README page
  • You are proficient in AWS and Kubernetes

Note: If you are using Alfresco Transform Service 1.4 or newer, and you want to do IPTC metadata extraction, then you need to bootstrap the IPTC Content Model manually into Content Services.

Set up an EKS cluster

Follow the AWS EKS Getting Started Guide to create a cluster and prepare your local machine to connect to the cluster. Use the Managed nodes - Linux option and specify a --node-type of at least m5.xlarge.
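A sketch of creating such a cluster with eksctl; the region and node count are illustrative assumptions:

    eksctl create cluster \
      --name YOUR-CLUSTER-NAME \
      --region us-east-1 \
      --node-type m5.xlarge \
      --nodes 3 \
      --managed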

As we’ll be using Helm to deploy the Content Services chart, follow the Using Helm with EKS instructions to set up Helm on your local machine.

Helm also needs to know where to find charts. Run the following commands to add the Nginx ingress and Alfresco repositories to your machine:
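The repository names and URLs below are a reasonable assumption for the ingress-nginx and Alfresco stable chart repositories; verify them against the current documentation:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add alfresco https://kubernetes-charts.alfresco.com/stable
    helm repo update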

Optionally, follow the tutorial to deploy the Kubernetes Dashboard to your cluster. This can be really useful for troubleshooting issues that may occur.

Prepare the cluster for Content Services

Now that we have an EKS cluster up and running, there are a few one-time steps we need to perform to prepare the cluster for installing Content Services.

DNS

  1. Create a hosted zone in Route53 using these steps if you don’t already have one available.

  2. Create a public certificate for the hosted zone (created in step 1) in Certificate Manager using these steps if you don’t have one already available. Make a note of the certificate ARN for use later.

  3. Create a file called external-dns.yaml with the text below (replace YOUR-DOMAIN-NAME with the domain name you created in step 1). This manifest defines a service account and a cluster role for managing DNS:

  4. Use the kubectl command to deploy the external-dns service (an example command is shown after this list).

  5. Find the name of the role used by the nodes with the AWS CLI (replace YOUR-CLUSTER-NAME with the name you gave your cluster); an example is shown after this list.

  6. In the IAM console find the role discovered in the previous step and attach the AmazonRoute53FullAccess managed policy as shown in the screenshot below:
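Hedged examples of the commands referenced in steps 4 and 5; the nodegroup name is an assumption you should replace with your own:

    # Step 4: deploy the external-dns manifest created earlier
    kubectl apply -f external-dns.yaml

    # Step 5: list the cluster's nodegroups, then read the IAM role attached to the nodes
    aws eks list-nodegroups --cluster-name YOUR-CLUSTER-NAME
    aws eks describe-nodegroup \
      --cluster-name YOUR-CLUSTER-NAME \
      --nodegroup-name YOUR-NODEGROUP-NAME \
      --query "nodegroup.nodeRole" --output text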

File system

  1. Create an Elastic File System in the VPC created by EKS using these steps, ensuring a mount target is created in each subnet. Make a note of the File System ID (circled in the screenshot below):

  2. Find the ID of the VPC created when your cluster was built (replace YOUR-CLUSTER-NAME with the name you gave your cluster); an example command is shown after this list.

  3. Find the CIDR range of the VPC (replace VPC-ID with the ID retrieved in the previous step); an example command is shown after this list.

  4. Go to the Security Groups section of the VPC Console and search for the VPC using the ID retrieved in step 2, as shown in the screenshot below:

  5. Click on the default security group for the VPC (highlighted in the screenshot above) and add an inbound rule for NFS traffic from the VPC CIDR range as shown in the screenshot below:

  6. Deploy an NFS Client Provisioner with Helm using the following command (replace EFS-DNS-NAME with the string file-system-id.efs.aws-region.amazonaws.com where the file-system-id is the ID retrieved in step 1 and aws-region is the region you’re using, e.g. fs-72f5e4f1.efs.us-east-1.amazonaws.com):
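Sketches of the commands for steps 2, 3, and 6. The AWS CLI queries are standard; the NFS provisioner shown here is the nfs-subdir-external-provisioner chart (the successor to the old stable/nfs-client-provisioner), so the repository, release name, and value keys are assumptions to verify:

    # Step 2: VPC ID for the cluster
    aws eks describe-cluster --name YOUR-CLUSTER-NAME \
      --query "cluster.resourcesVpcConfig.vpcId" --output text

    # Step 3: CIDR range of that VPC
    aws ec2 describe-vpcs --vpc-ids VPC-ID \
      --query "Vpcs[0].CidrBlock" --output text

    # Step 6: NFS client provisioner backed by the EFS file system
    helm repo add nfs-subdir-external-provisioner \
      https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install alfresco-nfs-provisioner \
      nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
      --set nfs.server=EFS-DNS-NAME \
      --set nfs.path=/ \
      --set storageClass.name=nfs-client \
      --namespace kube-system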

Deploy Content Services

Now that the EKS cluster is set up, we can deploy Content Services.

Namespace

Namespaces in Kubernetes isolate workloads from each other. Create a namespace to host Content Services inside the cluster using the following command. We’ll then use the alfresco namespace throughout the rest of the tutorial:
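Based on the namespace name used in this tutorial, the command is simply:

    kubectl create namespace alfresco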

Ingress

  1. Create a file called ingress-rbac.yaml with the text below:

  2. Use the kubectl command to create the cluster roles required by the ingress service (an example is shown after these steps).

  3. Deploy the ingress (replace ACM_CERTIFICATE_ARN and YOUR-DOMAIN-NAME with the ARN of the certificate and hosted zone created earlier in the DNS section):

    Note: The command will wait until the deployment is ready.
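Hedged examples for steps 2 and 3. The ingress-nginx value keys and the AWS load-balancer annotations below follow a common pattern for terminating TLS at the load balancer with an ACM certificate, but the exact keys and chart version should be checked against the acs-deployment documentation; the release name acs-ingress is an assumption:

    # Step 2: apply the RBAC manifest created in step 1
    kubectl apply -f ingress-rbac.yaml --namespace alfresco

    # Step 3: deploy the ingress controller (value keys are illustrative);
    # --atomic makes the command wait until the deployment is ready
    helm install acs-ingress ingress-nginx/ingress-nginx \
      --set controller.scope.enabled=true \
      --set controller.scope.namespace=alfresco \
      --set controller.service.targetPorts.https=http \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol"="http" \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports"="https" \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert"="ACM_CERTIFICATE_ARN" \
      --set controller.service.annotations."external-dns\.alpha\.kubernetes\.io/hostname"="acs.YOUR-DOMAIN-NAME" \
      --atomic \
      --namespace alfresco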

Docker registry secret

Create a docker registry secret to allow the protected images to be pulled from Quay.io by running the following command (replace YOUR-USERNAME and YOUR-PASSWORD with your credentials):
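A sketch of the secret creation; the secret name quay-registry-secret is an assumption and must match whatever name the chart expects:

    kubectl create secret docker-registry quay-registry-secret \
      --docker-server=quay.io \
      --docker-username=YOUR-USERNAME \
      --docker-password=YOUR-PASSWORD \
      --namespace alfresco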


Choose Content Services version

Decide whether you want to install the latest version of Content Services (Enterprise) or a previous version, and follow the steps in the relevant section below.

Latest Enterprise version

Deploy the latest version of Content Services by running the following command (replace YOUR-DOMAIN-NAME with the hosted zone you created earlier):
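A sketch of what this install might look like. The helm flags are standard; the chart value names (externalProtocol, externalHost, externalPort, registryPullSecrets, and the persistence settings) are assumptions based on the acs chart and should be verified against its values file:

    helm install acs alfresco/alfresco-content-services \
      --set externalProtocol="https" \
      --set externalHost="acs.YOUR-DOMAIN-NAME" \
      --set externalPort="443" \
      --set registryPullSecrets=quay-registry-secret \
      --set persistence.enabled=true \
      --set persistence.storageClass.enabled=true \
      --set persistence.storageClass.name="nfs-client" \
      --atomic \
      --timeout 10m0s \
      --namespace alfresco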

Note: The command will wait until the deployment is ready.

Previous Enterprise version
  1. Download the version specific values file you require from the helm/alfresco-content-services folder.

  2. Deploy the specific version of Content Services by running the following command (replace YOUR-DOMAIN-NAME with the hosted zone you created earlier, and MAJOR & MINOR with the appropriate values):

    Note: The command will wait until the deployment is ready.
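The previous-version install is the same sketch with a pinned chart version and the downloaded values file layered on top; the chart version and the values file name below are placeholders:

    helm install acs alfresco/alfresco-content-services \
      --version CHART-VERSION \
      -f MAJOR.MINOR.N_values.yaml \
      --set externalHost="acs.YOUR-DOMAIN-NAME" \
      --atomic \
      --timeout 10m0s \
      --namespace alfresco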

Access

When the deployment is complete, you can access the following URLs. Replace YOUR-DOMAIN-NAME with the hosted zone you created earlier:

  • Repository: https://acs.YOUR-DOMAIN-NAME/alfresco
  • Alfresco Share: https://acs.YOUR-DOMAIN-NAME/share
  • API Explorer: https://acs.YOUR-DOMAIN-NAME/api-explorer

Since you deployed Enterprise, you’ll also have access to:

  • Alfresco Digital Workspace: https://acs.YOUR-DOMAIN-NAME/workspace/
  • Alfresco Sync Service: https://acs.YOUR-DOMAIN-NAME/syncservice/healthcheck

If you’re running Content Services 6.2 (i.e. the latest version) and already have a valid license file for this version, you can apply it directly to the running system. Navigate to the Admin Console and apply your license:

  • https://acs.YOUR-DOMAIN-NAME/alfresco/service/enterprise/admin/admin-license (this only applies for the Enterprise Download Trial)
  • Default username and password is admin
  • See Uploading a new license for more details

Configuration options

By default, this tutorial installs an out-of-the-box setup, however there are many configuration options shown in the table below. There are also several examples covering various use cases.

The following table lists the configurable parameters of the Content Services chart and their default values.

Parameter | Description
repository.adminPassword | Administrator password for Content Services in MD5 hash format. The default value is the MD5 hash of the string admin: 209c6174da490caeb422f3fa5a7ae634
postgresql.enabled | Enable the use of the postgres chart in the deployment. The default value is true
postgresql.postgresUser | Postgresql database user. The default value is alfresco
postgresql.postgresPassword | Postgresql database password. The default value is alfresco
postgresql.postgresDatabase | Postgresql database name. The default value is alfresco
database.external | Enable the use of an externally provisioned database. The default value is false
database.driver | External database driver (blank by default)
database.user | External database user (blank by default)
database.password | External database password (blank by default)
database.url | External database JDBC URL (blank by default)
alfresco-search.resources.requests.memory | Alfresco Search Services requests memory. The default value is 250Mi
alfresco-search.ingress.enabled | Enable external access for Alfresco Search Services. The default value is false
alfresco-search.ingress.basicAuth | If alfresco-search.ingress.enabled is true, you need to provide a base64-encoded htpasswd-format user name and password (example: echo -n '$(htpasswd -nbm solradmin somepassword)' where solradmin is the username and somepassword is the password). The default value is None
alfresco-search.ingress.whitelist_ips | If alfresco-search.ingress.enabled is true, you can restrict /solr to a list of IP addresses or CIDR ranges. The default value is 0.0.0.0/0
persistence.repository.enabled | Enable volume persistence for the repository. The default value is true
s3connector.enabled | Switch this to true if you have access to the S3 Connector AMP. The default value is false
s3connector.config | S3 configuration. Example: s3connector.config.bucketName: myS3Bucket. The default value is {}
s3connector.secrets | S3 secrets configuration. Example: s3connector.secrets.accessKey: AJJJJJJJJ. The default value is {}
email.server.enabled | Enables the email server. The default value is false
email.server.port | Specifies the port number for the email server. The default value is 1125
email.server.domain | Specifies the name or the IP address of the network to bind the email server to.
email.server.enableTLS | Enables STARTTLS, an extension to plain text communication protocols. The default value is true
email.server.hideTLS | Hides the STARTTLS capability from clients. The default value is false
email.server.requireTLS | Requires clients to use STARTTLS. The default value is false
email.server.auth.enabled | Turns authentication on for the email server. The default value is true
email.server.connections.max | The maximum number of connections allowed. Increase this number to favour the email subsystem at the expense of the rest of Alfresco. The default value is 3
email.server.allowed.senders | Provides a comma-separated list of email REGEX patterns of allowed senders.
email.server.blocked.senders | Provides a comma-separated list of email REGEX patterns of blocked senders.
email.inbound.enabled | Enables or disables the inbound email service. The default value is false
email.inbound.unknownUser | The username to authenticate with when the sender address is not recognized in Alfresco. The default value is anonymous
email.inbound.emailContributorsAuthority | Allows the email contributors to belong to an authority.
email.handler.folder.overwriteDuplicates | Specifies whether duplicate messages to a folder overwrite each other or are named with a (number). The default value is true
mail.encoding | Specifies the encoding for email. The default value is UTF-8
mail.host | Specifies the host name of the SMTP host, that is, the host name or IP address of the server to which email should be sent.
mail.port | Specifies the port number on which the SMTP service runs. The default value is 25
mail.protocol | Specifies which protocol to use for sending email. The default value is smtps
mail.username | Specifies the user name of the account that connects to the SMTP server.
mail.password | Specifies the password for the user name used in mail.username.
mail.from.default | Specifies the email address from which email notifications are sent.
mail.from.enabled | If this property is set to false, then the value set in mail.from.default is always used.
mail.smtp.auth | Specifies if authentication is required or not. The default value is true
mail.smtp.debug | Specifies if SMTP debugging is required or not. The default value is false
mail.smtp.starttls.enable | Specifies if transport layer security (TLS) needs to be enabled or not. The default value is true
mail.smtp.timeout | Specifies the timeout in milliseconds for SMTP. The default value is 20000
mail.smtps.auth | Specifies if authentication for SMTPS is required or not. The default value is true
mail.smtps.starttls.enable | Specifies if transport layer security for SMTPS needs to be enabled or not. The default value is true
imap.server.enabled | Enables or disables the IMAP subsystem. The default value is false
imap.server.port | The port the IMAP server listens on (IMAP has a reserved port number of 143). The default value is 1143
imap.server.host | Replace this value with the IP address (or corresponding DNS name) of your external IP interface. The default value is 0.0.0.0
imap.server.imap.enabled | Enables or disables the plain IMAP protocol. The default value is true
imap.server.imaps.enabled | Enables or disables the IMAPS (secure IMAP) protocol. The default value is true
imap.server.imaps.port | The port the IMAPS server listens on (IMAPS has a reserved port number of 993). The default value is 1144
imap.mail.from.default | Configures the default from address for email sent from the IMAP client.
imap.mail.to.default | Configures the default to address for email sent from the IMAP client.
This deployment is also not fully secured by default. To learn about and apply further restrictions including pod security, network policies etc., see the EKS Best Practices for Security.

Troubleshooting

Here’s some help for diagnosing and resolving any issues you may encounter.

Kubernetes dashboard

The easiest way to troubleshoot issues on a Kubernetes deployment is to use the dashboard. Assuming you’ve deployed the dashboard in the cluster, you can use the following steps to explore your deployment:

  1. Retrieve the service account token with the command shown after these steps.

  2. Run the kubectl proxy:

  3. Open a browser and navigate to: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

  4. Select Token, enter the token retrieved in step 1, and click Sign in.

  5. Select alfresco from the Namespace menu, click Pods, and then the pod name.

    To view the logs, press the Menu icon in the toolbar as highlighted below:
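Hedged examples of the commands in steps 1 and 2 above, assuming the dashboard was deployed with the eks-admin service account from the EKS dashboard tutorial (that account name is an assumption):

    # Step 1: read the token for the dashboard service account
    kubectl -n kube-system describe secret \
      $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

    # Step 2: start a local proxy to the cluster API
    kubectl proxy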

Port-forwarding to a pod

This approach allows you to connect to a specific application in the cluster. See the Kubernetes documentation for details.

You can access any component of the deployment that’s not exposed via ingress rules in this way, for example Alfresco Search Services, DB or individual transformers.

View log files via command-line


You can view log files for individual pods from the command-line using the kubectl utility.

Retrieve the list of pods in the alfresco namespace:
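For example:

    kubectl get pods --namespace alfresco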

Retrieve the logs for a pod using the following command (replace the pod name accordingly):
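The pod name below is a placeholder; use one of the names returned by the previous command:

    kubectl logs POD-NAME --namespace alfresco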

To continually follow the log file for a pod, use the -f option:
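For example:

    kubectl logs POD-NAME --namespace alfresco -f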

Change log levels

You can change the log levels for the specific Java packages in the content-repository via the Admin Console. Use the following URL to access it: https://<host>/alfresco/service/enterprise/admin/admin-log-settings

Note: Changes are only applied to the content-repository node from which the Admin Console is launched.

  • You can change the log levels by modifying log4j.properties in the content-repository image and doing a rolling update to the deployment. In this case the settings will be applied system-wide. See the customization guidelines for more.
  • The Content Services deployment doesn’t include any log aggregation tools. The logs generated by pods will be lost once the pods are terminated.

JMX dump

This tool allows you to download a ZIP file containing information useful for troubleshooting and supporting your system. Issue a GET request (Admin only) to: https://<host>/alfresco/service/api/admin/jmxdump.

Cleanup

  1. Remove the acs and acs-ingress deployments (example commands for these steps are shown after the list).

  2. Delete the Kubernetes namespace:

  3. Go to the EFS Console, select the file system we created earlier, and press the “Delete” button to remove the mount targets and file system.

  4. Go to the IAM console and remove the AmazonRoute53FullAccess managed policy we added to the NodeInstanceRole in the File System section, otherwise the cluster will fail to delete in the next step.

  5. Finally, delete the EKS cluster (replace YOUR-CLUSTER-NAME with the name you gave your cluster):
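Hedged examples of the cleanup commands for steps 1, 2, and 5 (the release names follow the ones used above; the eksctl command assumes the cluster was created with eksctl):

    # Step 1: remove the Helm releases
    helm uninstall acs acs-ingress --namespace alfresco

    # Step 2: delete the namespace
    kubectl delete namespace alfresco

    # Step 5: delete the EKS cluster
    eksctl delete cluster --name YOUR-CLUSTER-NAME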


To quickly manage and deploy applications for Kubernetes, you can use the open-source Helm package manager. With Helm, application packages are defined as charts, which are collected and stored in a Helm chart repository.

This article shows you how to host Helm charts repositories in an Azure container registry, using Helm 3 commands and storing charts as OCI artifacts. In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the Chart Template Developer's Guide. You can also store an existing Helm chart from another Helm repo.

Helm 3 or Helm 2?

To store, manage, and install Helm charts, you use commands in the Helm CLI. Major Helm releases include Helm 3 and Helm 2. For details on the version differences, see the version FAQ.

Helm 3 should be used to host Helm charts in Azure Container Registry. With Helm 3, you:

  • Store and manage Helm charts in repositories in an Azure container registry
  • Store Helm charts in your registry as OCI artifacts. Azure Container Registry provides GA support for OCI artifacts, including Helm charts.
  • Authenticate with your registry using the helm registry login or az acr login command
  • Use helm chart commands to push, pull, and manage Helm charts in a registry
  • Use helm install to install charts to a Kubernetes cluster from a local repository cache

Feature support

Azure Container Registry supports specific Helm chart management features depending on whether you are using Helm 3 (current) or Helm 2 (deprecated).

Feature | Helm 2 | Helm 3
Manage charts using az acr helm commands | ✔️ |
Store charts as OCI artifacts | | ✔️
Manage charts using az acr repository commands and the Repositories blade in Azure portal | | ✔️

Note

As of Helm 3, az acr helm commands for use with the Helm 2 client are being deprecated. A minimum of 3 months' notice will be provided in advance of command removal.


Chart version compatibility

The following Helm chart versions can be stored in Azure Container Registry and are installable by the Helm 2 and Helm 3 clients.

Version | Helm 2 | Helm 3
apiVersion v1 | ✔️ | ✔️
apiVersion v2 | | ✔️

Migrate from Helm 2 to Helm 3

If you've previously stored and deployed charts using Helm 2 and Azure Container Registry, we recommend migrating to Helm 3. See:

  • Migrating Helm 2 to 3 in the Helm documentation.
  • Migrate your registry to store Helm OCI artifacts, later in this article

Prerequisites

The following resources are needed for the scenario in this article:

  • An Azure container registry in your Azure subscription. If needed, create a registry using the Azure portal or the Azure CLI.
  • Helm client version 3.1.0 or later - Run helm version to find your current version. For more information on how to install and upgrade Helm, see Installing Helm.
  • A Kubernetes cluster where you will install a Helm chart. If needed, create an Azure Kubernetes Service cluster.
  • Azure CLI version 2.0.71 or later - Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

Enable OCI support

Use the helm version command to verify that you have installed Helm 3:
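For example:

    helm version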

Set the following environment variable to enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
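The experimental OCI support in the Helm 3 client is gated behind the HELM_EXPERIMENTAL_OCI variable:

    export HELM_EXPERIMENTAL_OCI=1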

Create a sample chart

Create a test chart using the following commands:
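For example, a chart named hello-world (the name used in the rest of this article):

    helm create hello-world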

As a basic example, change directory to the templates folder and first delete the contents there:
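For example:

    cd hello-world/templates
    rm -rf *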

In the templates folder, create a file called configmap.yaml, by running the following command:
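A minimal configmap.yaml along the lines of the Helm Getting Started example; the ConfigMap name and data are illustrative:

    cat <<EOF > configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: hello-world-configmap
    data:
      myvalue: "Hello World"
    EOF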

For more about creating and running this example, see Getting Started in the Helm Docs.

Save chart to local registry cache

Helm Install Docker Registry Download

Change directory to the hello-world subdirectory. Then, run helm chart save to save a copy of the chart locally and also create an alias with the fully qualified name of the registry (all lowercase) and the target repository and tag.

In the following example, the registry name is mycontainerregistry, the target repo is helm/hello-world, and the target chart tag is 0.1.0. To successfully pull dependencies, the target chart image name and tag must match the name and version in Chart.yaml.
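With those names, the save command might look like this (run from the hello-world chart directory):

    cd ..
    helm chart save . mycontainerregistry.azurecr.io/helm/hello-world:0.1.0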

Run helm chart list to confirm you saved the charts in the local registry cache. Output is similar to:

Authenticate with the registry

Run helm registry login to authenticate with the registry. You may pass registry credentials appropriate for your scenario, such as service principal credentials, or a repository-scoped token.

For example, create an Azure Active Directory service principal with pull and push permissions (AcrPush role) to the registry. Then supply the service principal credentials to helm registry login. The following example supplies the password using an environment variable:
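A sketch of creating the service principal and logging in; the service principal name acr-helm-sp is an assumption:

    SERVICE_PRINCIPAL_NAME=acr-helm-sp
    ACR_REGISTRY_ID=$(az acr show --name mycontainerregistry --query id --output tsv)
    # Create a service principal with push and pull rights, capturing its password
    SP_PASSWD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
      --scopes $ACR_REGISTRY_ID --role acrpush --query password --output tsv)
    SP_APP_ID=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME \
      --query "[].appId" --output tsv)
    # Authenticate the Helm client with the registry
    helm registry login mycontainerregistry.azurecr.io \
      --username $SP_APP_ID --password $SP_PASSWD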

Tip

You can also log in to the registry with your individual Azure AD identity to push and pull Helm charts.

Push chart to registry

Run the helm chart push command in the Helm 3 CLI to push the chart to the fully qualified target repository:
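For example:

    helm chart push mycontainerregistry.azurecr.io/helm/hello-world:0.1.0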

After a successful push, output is similar to:

List charts in the repository

As with images stored in an Azure container registry, you can use az acr repository commands to show the repositories hosting your charts, and chart tags and manifests.

For example, run az acr repository show to see the properties of the repo you created in the previous step:
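For example:

    az acr repository show --name mycontainerregistry --repository helm/hello-world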

Output is similar to:


Run the az acr repository show-manifests command to see details of the chart stored in the repository. For example:
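For example (the --detail flag includes the media types in the output):

    az acr repository show-manifests --name mycontainerregistry \
      --repository helm/hello-world --detail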

Output, abbreviated in this example, shows a configMediaType of application/vnd.cncf.helm.config.v1+json:

Pull chart to local cache

To install a Helm chart to Kubernetes, the chart must be in the local cache. In this example, first run helm chart remove to remove the existing local chart named mycontainerregistry.azurecr.io/helm/hello-world:0.1.0:
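For example:

    helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:0.1.0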

Run helm chart pull to download the chart from the Azure container registry to your local cache:
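For example:

    helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:0.1.0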

Export Helm chart

To work further with the chart, export it to a local directory using helm chart export. For example, export the chart you pulled to the install directory:
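For example:

    helm chart export mycontainerregistry.azurecr.io/helm/hello-world:0.1.0 \
      --destination ./install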

To view information for the exported chart in the repo, run the helm show chart command in the directory where you exported the chart.
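For example, from the install directory:

    cd install
    helm show chart ./hello-world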

Helm returns detailed information about the latest version of your chart, as shown in the following sample output:

Install Helm chart

Run helm install to install the Helm chart you pulled to the local cache and exported. Specify a release name such as myhelmtest, or pass the --generate-name parameter. For example:
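For example:

    helm install myhelmtest ./install/hello-world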

Output after successful chart installation is similar to:

To verify the installation, run the helm get manifest command.
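For example:

    helm get manifest myhelmtest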

The command returns the YAML data in your configmap.yaml template file.

Run helm uninstall to uninstall the chart release on your cluster:
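For example:

    helm uninstall myhelmtest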

Delete chart from the registry

To delete a chart from the container registry, use the az acr repository delete command. Run the following command and confirm the operation when prompted:
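For example:

    az acr repository delete --name mycontainerregistry --repository helm/hello-world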


Migrate your registry to store Helm OCI artifacts

If you previously set up your Azure container registry as a chart repository using Helm 2 and the az acr helm commands, we recommend that you upgrade to the Helm 3 client. Then, follow these steps to store the charts as OCI artifacts in your registry.

Important

  • After you complete migration from a Helm 2-style (index.yaml-based) chart repository to OCI artifact repositories, use the Helm CLI and az acr repository commands to manage the charts. See previous sections in this article.
  • The Helm OCI artifact repositories are not discoverable using Helm commands such as helm search and helm repo list. For more information about Helm commands used to store charts as OCI artifacts, see the Helm documentation.

Enable OCI support

Ensure that you are using the Helm 3 client:

Enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
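As before, verify the client version and set the experimental OCI flag:

    helm version
    export HELM_EXPERIMENTAL_OCI=1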

List current charts

List the charts currently stored in the registry, here named myregistry:
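With a Helm 2-style repository, the chart list can be read with the (now deprecated) az acr helm commands; for example:

    az acr helm list --name myregistry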

Output shows the charts and chart versions:

Save charts as OCI artifacts


For each chart in the repo, pull the chart locally, and save it as an OCI artifact. Example:
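A hedged sketch for one chart; it assumes the registry was previously added as a Helm repo named myregistry, and the chart name, version, and target tag are placeholders:

    # Pull the chart archive from the Helm 2-style repository and unpack it
    helm pull myregistry/mychart --version 0.1.0 --untar
    # Save it to the local cache as an OCI artifact under the target reference
    helm chart save ./mychart myregistry.azurecr.io/helm/mychart:0.1.0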

Push charts to registry


Log in to the registry:
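For example (you are prompted for credentials unless you pass them explicitly):

    helm registry login myregistry.azurecr.io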

Push each chart to the registry:
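For example, using the placeholder chart reference from the previous step:

    helm chart push myregistry.azurecr.io/helm/mychart:0.1.0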

After pushing a chart, confirm it is stored in the registry:
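For example:

    az acr repository show --name myregistry --repository helm/mychart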

After pushing all of the charts, optionally remove the Helm 2-style chart repository from the registry. Doing so reduces storage in your registry:

Next steps

  • For more information on how to create and deploy Helm charts, see Developing Helm charts.
  • Learn more about installing applications with Helm in Azure Kubernetes Service (AKS).
  • Helm charts can be used as part of the container build process. For more information, see Use Azure Container Registry Tasks.
