{"pageContext":{"isCreatedByStatefulCreatePages":false,"url":"/posts/automating-freeipa-with-terraform-43b7/","relativePath":"posts/automating-freeipa-with-terraform-43b7.md","relativeDir":"posts","base":"automating-freeipa-with-terraform-43b7.md","name":"automating-freeipa-with-terraform-43b7","frontmatter":{"title":"Automating FreeIPA with Terraform","stackbit_url_path":"posts/automating-freeipa-with-terraform-43b7","date":"2020-04-30T10:48:36.114Z","excerpt":"The FreeIPA Terraform provider allows to automate creation and management of FreeIPA resources.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--EFbU1JTl--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xterraform_bandeau.png.pagespeed.ic.neAGqH-_lX.webp","comments_count":0,"positive_reactions_count":0,"tags":["terraform","devops","freeipa","opensource"],"canonical_url":"https://www.camptocamp.com/actualite/automating-freeipa-with-terraform/","template":"post"},"html":"<p><a href=\"https://www.terraform.io/\">Terraform</a> is great for cloud provisioning and has now become a standard tool to deploy infrastructures as code, in a DevOps fashion.</p>\n<p><a href=\"https://www.terraform.io/docs/providers/index.html\">Many plugins</a> exist to cover specific needs, from major cloud providers (AWS, GCP, Azure, etc.) to specific app APIs (Grafana, GitHub, or even PostgreSQL). The community provides and maintains <a href=\"https://www.terraform.io/docs/providers/type/community-index.html\">additional providers</a> which can be installed and used in any Terraform project as plugins.</p>\n<p>Camptocamp developed several providers over the last few years. 
Besides the <a href=\"https://www.terraform.io/docs/providers/rancher/index.html\">official Rancher provider</a>, which was co-developed by our team and contributed to the community, we maintain providers to integrate Terraform with the <a href=\"https://github.com/camptocamp/terraform-provider-puppetca\">Puppet CA</a>, <a href=\"https://github.com/camptocamp/terraform-provider-puppetdb\">PuppetDB</a>, as well as the <a href=\"https://github.com/camptocamp/terraform-provider-pass\">gopass password vault</a>.</p>\n<p>More recently, we needed to automate FreeIPA resources using Terraform, so we started <a href=\"https://github.com/camptocamp/terraform-provider-freeipa\">a new provider</a>.</p>\n<h1>Installing</h1>\n<p>Installing additional Terraform providers is <a href=\"https://www.terraform.io/docs/configuration/providers.html#third-party-plugins\">rather straightforward</a>: simply download the binary from the <a href=\"https://github.com/camptocamp/terraform-provider-freeipa/releases\">releases page</a> and drop it in your\n<code>~/.terraform.d/plugins</code>\ndirectory.</p>\n<h1>Usage</h1>\n<p>As with any Terraform provider, you first need to configure it. You can do that using either hardcoded parameters or environment variables. 
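</p>
<p>For instance, the environment-variable approach could look like this (a sketch: the FREEIPA_* variable names match the comments in the provider block below, and the values are placeholders):</p>

```shell
# Export the provider settings instead of hardcoding them in HCL
# (placeholder values; avoid leaving real secrets in shell history).
export FREEIPA_HOST=ipa.example.test
export FREEIPA_USERNAME=admin
export FREEIPA_PASSWORD='P@S5sw0rd'
# then run: terraform plan
```

<p>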
In the latter case, we strongly encourage you to use <a href=\"https://github.com/cyberark/summon\">summon</a> as a wrapper that dynamically exposes the environment variables at call time.</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">provider freeipa {\n  host = &quot;ipa.example.test&quot; # or set $FREEIPA_HOST\n  username = &quot;admin&quot; # or set $FREEIPA_USERNAME\n  password = &quot;P@S5sw0rd&quot; # or set $FREEIPA_PASSWORD\n  insecure = true\n}</code>\n        </deckgo-highlight-code>\n<p>Next, you can start writing resources to manage FreeIPA hosts and DNS records:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">resource freeipa_host &quot;foo&quot; {\n  fqdn = &quot;foo.example.test&quot;\n  description = &quot;This is my foo host&quot;\n  force = true\n  random = true\n  userpassword = &quot;abcde&quot;\n}\n\nresource freeipa_dns_record &quot;bar&quot; {\n  idnsname = &quot;bar&quot;\n  dnszoneidnsname = &quot;myzone&quot;\n  dnsttl = 20\n  records = [&quot;1.2.3.4&quot;]\n}</code>\n        </deckgo-highlight-code>\n<p>At the moment, this FreeIPA provider features only two resource types, for managing FreeIPA hosts and DNS records. 
Don't hesitate to <a href=\"https://github.com/camptocamp/terraform-provider-freeipa\">contribute to it</a> by providing more resource types!</p>\n<p><em>This post was originally published on <a href=\"https://www.camptocamp.com/actualite/automating-freeipa-with-terraform/\">https://www.camptocamp.com/actualite/automating-freeipa-with-terraform/</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/automating-freeipa-with-terraform-43b7\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    ","pages":[{"url":"/contact/","relativePath":"contact.md","relativeDir":"","base":"contact.md","name":"contact","frontmatter":{"title":"Get in Touch","img_path":"images/contact.jpg","form_id":"contactForm","form_fields":[{"type":"text","name":"name","label":"Name","default_value":"Your name","is_required":true},{"type":"email","name":"email","label":"Email","default_value":"Your email address","is_required":true},{"type":"select","name":"subject","label":"Subject","default_value":"Please select","options":["Error on the site","Sponsorship","Other"]},{"type":"textarea","name":"message","label":"Message","default_value":"Your message"},{"type":"checkbox","name":"consent","label":"I understand that this form is storing my submitted information so I can be contacted."}],"submit_label":"Send Message","template":"contact"},"html":"<p>To get in touch fill the form 
below.</p>"},{"url":"/","relativePath":"index.md","relativeDir":"","base":"index.md","name":"index","frontmatter":{"title":"Home","template":"home"},"html":""},{"url":"/posts/a-simple-auth-proxy-for-eks-24dh/","relativePath":"posts/a-simple-auth-proxy-for-eks-24dh.md","relativeDir":"posts","base":"a-simple-auth-proxy-for-eks-24dh.md","name":"a-simple-auth-proxy-for-eks-24dh","frontmatter":{"title":"A Simple Auth Proxy for EKS","stackbit_url_path":"posts/a-simple-auth-proxy-for-eks-24dh","date":"2020-11-11T16:10:48.322Z","excerpt":"How to easily give access to an EKS cluster using an authentication proxy with a PSK","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--bJ8rTXta--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/49dhgxfjbeqgo2kxo5sr.jpeg","comments_count":0,"positive_reactions_count":8,"tags":["aws","kubernetes","devops","opensource"],"canonical_url":"https://dev.to/camptocamp-ops/a-simple-auth-proxy-for-eks-24dh","template":"post"},"html":"<p><a href=\"https://aws.amazon.com/eks/\">AWS EKS</a> is a great option for a hosted Kubernetes cluster.</p>\n<p>It is in particular easy to use for demos and training sessions.</p>\n<p>However, EKS authentication is based off AWS IAM, which means users need an AWS account. 
Authenticating to EKS typically involves calling the\n<code>aws eks get-token</code>\ncommand in your\n<code>.kube/config</code>\nto retrieve an authentication token.</p>\n<p>As we were setting up EKS for Kubernetes training, we needed a simple way for users without an AWS account to access the cluster, so we created a basic proxy service for the EKS\n<code>get-token</code>\naction.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=camptocamp%2Faws-iam-authenticator-proxy\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>Deploying with Docker</h2>\n<p>The proxy can be deployed using Docker, with AWS credentials, e.g.:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">docker run --rm -p 8080:8080 \\\n             -e AWS_ACCESS_KEY_ID=&lt;AWS_ACCESS_KEY_ID&gt; \\\n             -e AWS_SECRET_ACCESS_KEY=&lt;AWS_SECRET_ACCESS_KEY&gt; \\\n             -e EKS_CLUSTER_ID=&lt;EKS_CLUSTER_ID&gt; \\\n             -e PSK=&quot;mysecretstring&quot; \\\n    camptocamp/aws-iam-authenticator-proxy:latest</code>\n        </deckgo-highlight-code>\n<p>The proxy's rights on the cluster will depend on the user whose access key you provide.</p>\n<p>The PSK is optional and adds a basic layer of protection to the proxy.</p>\n<p>Once the proxy is started, you can access it at <a href=\"http://localhost:8080?psk=mysecretstring\">http://localhost:8080?psk=mysecretstring</a>, so you can simply set your\n<code>~/.kube/config</code>\nto use\n<code>curl</code>\ninstead of\n<code>aws</code>\n:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">users:\n- name: &lt;cluster_name&gt;\n  user:\n    exec:\n      apiVersion: client.authentication.k8s.io/v1alpha1\n      command: curl\n      args:\n        - -s\n        - &quot;http://&lt;your_ip&gt;:8080/?psk=mysecretstring&quot;</code>\n        </deckgo-highlight-code>\n<h2>Deploying in EKS</h2>\n<p>Since you've got an EKS cluster in the first place, you 
might as well deploy the proxy in it.</p>\n<p>The repository provides a Helm chart for that, in the\n<code>k8s</code>\ndirectory of the GitHub project.</p>\n<p>You can simply instantiate the chart with the following values:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">eks_cluster_id: &quot;&lt;EKS_CLUSTER_ID&gt;&quot;\npsk: &quot;mysecretstring&quot;\naws:\n  access_key_id: &quot;&lt;AWS_ACCESS_KEY_ID&gt;&quot;\n  secret_access_key: &quot;&lt;AWS_SECRET_ACCESS_KEY&gt;&quot;</code>\n        </deckgo-highlight-code>\n<p>The AWS credentials will be stored in a Kubernetes secret and passed to the container.</p>\n<h3>Using IAM Roles for Service Accounts</h3>\n<p>However, since we're in AWS, we can also use <a href=\"https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html\">IAM roles for service accounts</a> and bypass the access keys altogether. This is a much cleaner approach.</p>\n<p>Here's how to do it, using Terraform to create the role and deploy the proxy.</p>\n<p>First, create a role linked to OIDC:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">module &quot;iam_assumable_role_aws_iam_authenticator_proxy&quot; {\n  source                        = &quot;terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc&quot;\n  version                       = &quot;3.3.0&quot;\n  create_role                   = true\n  number_of_role_policy_arns    = 0\n  role_name                     = &quot;aws-iam-authenticator-proxy&quot;\n  provider_url                  = replace(module.cluster.cluster_oidc_issuer_url, &quot;https://&quot;, &quot;&quot;)\n  oidc_fully_qualified_subjects = [&quot;system:serviceaccount:yournamespace:aws-iam-authenticator-proxy&quot;]\n}</code>\n        </deckgo-highlight-code>\n<p>replacing\n<code>yournamespace</code>\nwith the Kubernetes namespace where you will be deploying the proxy.</p>\n<p>Now we can configure the cluster to map that role to the 
Kubernetes role we want (e.g.\n<code>system:masters</code>\nto make it cluster admin).</p>\n<p>We'll create a random PSK and generate the Kubeconfig file to use\n<code>curl</code>\nwith the proxy:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">data &quot;aws_vpc&quot; &quot;this&quot; {\n  id = var.vpc_id\n}\n\ndata &quot;aws_subnet_ids&quot; &quot;private&quot; {\n  vpc_id = data.aws_vpc.this.id\n\n  tags = {\n    &quot;kubernetes.io/role/internal-elb&quot; = &quot;1&quot;\n  }\n}\n\nmodule &quot;cluster&quot; {\n  source  = &quot;terraform-aws-modules/eks/aws&quot;\n  version = &quot;13.1.0&quot;\n\n  cluster_name    = var.cluster_name\n  cluster_version = &quot;1.18&quot;\n\n  subnets          = data.aws_subnet_ids.private.ids\n  vpc_id           = var.vpc_id\n  enable_irsa      = true\n  map_roles        = [\n    {\n      rolearn  = module.iam_assumable_role_aws_iam_authenticator_proxy.this_iam_role_arn,\n      username = module.iam_assumable_role_aws_iam_authenticator_proxy.this_iam_role_name,\n      groups   = [&quot;system:masters&quot;]\n    },\n  ]\n\n  worker_groups = [\n    {\n      instance_type        = &quot;m5a.large&quot;\n      asg_desired_capacity = 2\n      asg_max_size         = 3\n    }\n  ]\n\n  kubeconfig_aws_authenticator_command = &quot;curl&quot;\n  kubeconfig_aws_authenticator_command_args = [\n    &quot;-s&quot;,\n    &quot;https://${var.auth_url}/?psk=${random_password.auth_proxy_psk.result}&quot;,\n  ]\n}\n\nresource &quot;random_password&quot; &quot;auth_proxy_psk&quot; {\n  length  = 16\n  special = false\n}</code>\n        </deckgo-highlight-code>\n<p>Finally, we can deploy the proxy in Kubernetes using Helm:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">resource &quot;helm_release&quot; &quot;aws-iam-authenticator-proxy&quot; {\n  name              = &quot;aws-iam-authenticator-proxy&quot;\n  chart             = 
&quot;https://github.com/camptocamp/aws-iam-authenticator-proxy/tree/master/k8s&quot;\n  namespace         = &quot;aws-iam-authenticator-proxy&quot;\n  dependency_update = true\n  create_namespace  = true\n\n  values = [\n    &lt;&lt;EOT\neks_cluster_id: &quot;${var.cluster_name}&quot;\npsk: &quot;${random_password.auth_proxy_psk.result}&quot;\nserviceAccount:\n  name: &quot;aws-iam-authenticator-proxy&quot;\n  annotations:\n    eks.amazonaws.com/role-arn: ${module.iam_assumable_role_aws_iam_authenticator_proxy.this_iam_role_arn}\nEOT\n  ]\n\n  depends_on = [\n    module.cluster,\n  ]\n}</code>\n        </deckgo-highlight-code>\n<p>You can add an\n<code>Ingress</code>\nor configure the\n<code>Service</code>\nto use an L4\n<code>LoadBalancer</code>\nby tuning the Helm values.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/a-simple-auth-proxy-for-eks-24dh\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/add-puppet-tag-142l/","relativePath":"posts/add-puppet-tag-142l.md","relativeDir":"posts","base":"add-puppet-tag-142l.md","name":"add-puppet-tag-142l","frontmatter":{"title":"Add #puppet tag","stackbit_url_path":"posts/add-puppet-tag-142l","date":"2020-06-12T06:08:13.053Z","excerpt":"It would be great to attract more DevOps-related content to dev.to. 
With a few other people, I've sta...","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":3,"positive_reactions_count":2,"tags":["meta","puppet","puppetize","discuss"],"canonical_url":"https://dev.to/raphink/add-puppet-tag-142l","template":"post"},"html":"<p>It would be great to attract more DevOps-related content to dev.to. With a few other people, I've started blogging about Puppet and would really appreciate it if it could become an official tag!</p>\n<p><em><a href=\"https://dev.to/raphink/add-puppet-tag-142l\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/automatic-renewal-of-puppet-certificates-28pm/","relativePath":"posts/automatic-renewal-of-puppet-certificates-28pm.md","relativeDir":"posts","base":"automatic-renewal-of-puppet-certificates-28pm.md","name":"automatic-renewal-of-puppet-certificates-28pm","frontmatter":{"title":"Automatic Renewal of Puppet Certificates","stackbit_url_path":"posts/automatic-renewal-of-puppet-certificates-28pm","date":"2020-05-04T06:36:04.499Z","excerpt":"Everyone who has been using Puppet with a self-signed CA for more than 5 years knows that dreaded time: the time when the CA must be 
renewed.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":0,"positive_reactions_count":7,"tags":["puppet","devops","opensource","security"],"canonical_url":"https://www.camptocamp.com/actualite/automatic-renewal-of-puppet-certificates/","template":"post"},"html":"<p>Everyone who has been using <a href=\"https://puppet.com/\">Puppet</a> with a self-signed CA for over 5 years knows the dreaded time: the time when the CA must be renewed.</p>\n<h1>Renewing the CA</h1>\n<p>The traditional approach is to create a new CA, and then use another mean to renew the certificates for all the nodes (SSH, MCollective, Ansible, etc.).</p>\n<p>Another possibility is to keep the same CA keys and generate a new CA certificate. There is actually <a href=\"https://github.com/puppetlabs/puppetlabs-certregen\">an official module to do that</a>. This module allows you to easily revive a CA that is about to expire in such a way that the new CA certificate is valid for current node certificates. As a result, you don't need to renew the node certificates right away; you only need to distribute the new CA certificate to ensure the cached version does not expire!</p>\n<p>There is however a consequence: node certificates will start to expire. Nodes over 5 years old will expire as soon as the CA expires. And so they need to be renewed, too.</p>\n<h1>Renewing the node certificates</h1>\n<p>What if the Puppet agent itself could ensure its certificate is always\nvalid? 
This can be achieved using two things:</p>\n<ul>\n<li>an autosign policy;</li>\n<li>the\n<code>puppet_certificate</code>\nresource type.</li>\n</ul>\n<h2>Autosign</h2>\n<p>The former is a <a href=\"https://puppet.com/docs/puppet/5.3/ssl_autosign.html#policy-based-autosigning\">standard Puppet feature</a>, with a simple principle: embed a secret in the Puppet CSR, which will be checked by a script on the CA.</p>\n<p>This approach makes it easy to automate node provisioning by having nodes register with Puppet automatically.</p>\n<p>The simplest form uses a shared password in the\n<code>csr_attributes.yaml</code>\nfile on the Puppet node.</p>\n<h2>Managing certificates</h2>\n<p>Now that we have a way to autosign certificates, let's see how to renew them automatically. We'll use the <a href=\"https://github.com/reidmv/puppet-module-puppet_certificate\">puppet_certificate Puppet module</a> for that. Here is the kind of code you could use:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">class profile::puppet::certificate (\n  String $psk,\n) {\n  file { &#39;/etc/puppetlabs/puppet/csr_attributes.yaml&#39;:\n    ensure  =&gt; file,\n    owner   =&gt; &#39;root&#39;,\n    group   =&gt; &#39;root&#39;,\n    mode    =&gt; &#39;0440&#39;,\n    content =&gt; &quot;---\\ncustom_attributes:\\n  1.2.840.113549.1.9.7: &#39;${psk}&#39;\\n&quot;,\n  }\n  ~&gt; puppet_certificate { $::trusted[&#39;certname&#39;]:\n    ensure               =&gt; valid,\n    onrefresh            =&gt; &#39;regenerate&#39;,\n    waitforcert          =&gt; 60,\n    renewal_grace_period =&gt; 20,\n    clean                =&gt; true,\n  }\n}</code>\n        </deckgo-highlight-code>\n<p>This code will:</p>\n<ul>\n<li>manage the\n<code>csr_attributes.yaml</code>\nfile to inject the PSK into it;</li>\n<li>manage the Puppet certificate of the node.</li>\n</ul>\n<p>In addition:</p>\n<ul>\n<li>If the PSK is modified, the certificate will be 
recreated;</li>\n<li>The certificate will automatically be renewed 20 days before it expires (using\n<code>ensure => valid</code>\n);</li>\n</ul>\n<p>Note that this only works if the certificate is cleaned from the Puppet CA before it gets regenerated. This is the point of the\n<code>clean => true</code>\nattribute. By default, however, the Puppet CA does not accept remote cleaning of certificates. You can allow nodes to clean their own certificates (and no others) by adding this to your Puppetserver's\n<code>auth.conf</code>\nfile:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">{\n    name: &quot;Allow nodes to delete their own certificates&quot;,\n    match-request: {\n        path: &quot;^/puppet-ca/v1/certificate(_status|_request)?/([^/]+)$&quot;,\n        type: regex,\n        method: [delete]\n    },\n    allow: &quot;$2&quot;,\n    sort-order: 500\n}</code>\n        </deckgo-highlight-code>\n<h2>Better security</h2>\n<p>While it is possible to use a simple shared password in the\n<code>csr_attributes.yaml</code>\nfor autosigning, it means all your nodes will contain that PSK, which can be used to create new certificates on the Puppet CA. This is not very secure; it can be improved by using a unique token for each node, so that a token can only be used to generate the certificate for that specific node.</p>\n<p>You could achieve this with tools such as Vault. Another idea is to generate a composite secret on the Puppet master by mixing the PSK with the certname and possibly any certificate extension you want to enforce. 
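</p>
<p>As a sketch of that idea (assuming a SHA-256 digest; the exact scheme is up to you, as long as the CA's policy script computes the same value, and the PSK and certname below are placeholders), the per-node token could be derived like this:</p>

```shell
# Derive a per-node autosign token by hashing the shared PSK together
# with the node's certname (illustrative scheme, placeholder values).
psk="super-secret-psk"
certname="node01.example.com"
token=$(printf '%s:%s' "$certname" "$psk" | sha256sum | cut -d' ' -f1)
echo "$token"
```

<p>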
You then need to adapt your autosign policy script to generate the same composite secret, which ensures that each node can only generate its own certificate, without changing its extensions.</p>\n<p><em>This post was originally published on <a href=\"https://www.camptocamp.com/actualite/automatic-renewal-of-puppet-certificates/\">camptocamp.com</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/automatic-renewal-of-puppet-certificates-28pm\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/backup-your-container-data-2f3f/","relativePath":"posts/backup-your-container-data-2f3f.md","relativeDir":"posts","base":"backup-your-container-data-2f3f.md","name":"backup-your-container-data-2f3f","frontmatter":{"title":"Backup your Container Data","stackbit_url_path":"posts/backup-your-container-data-2f3f","date":"2020-04-30T10:48:01.446Z","excerpt":"Containers have become a great facility to easily deploy applications, whether locally or on orchestrated clusters. 
However, containers are ephemeral, meaning their data should be stored externally and should be backed up.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--Qh3lKBaH--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/banner.png","comments_count":0,"positive_reactions_count":5,"tags":["opensource","backup","showdev","kubernetes"],"canonical_url":"https://www.camptocamp.com/actualite/backup-your-container-data/","template":"post"},"html":"<p>Containers have become a great facility to easily deploy applications, whether locally or on orchestrated clusters.</p>\n<p>However, containers are ephemeral, meaning their data should be stored externally. When possible, the data can be stored in databases or object storage. Most often though, you will need to resort to using data volumes, mounted inside your containers. How then can we perform a backup of this data?</p>\n<h1>Data location is known</h1>\n<p>Contrary to the traditional situation in application deployment, the location of critical data in containers is known, since containers use named volumes. We can thus connect to the Docker socket or the API managing the volumes to list them and perform the backups.</p>\n<h1>Introducing Bivac</h1>\n<p><a href=\"https://camptocamp.github.io/bivac/\">Bivac</a> is a tool created to do just that. It can be plugged into either a Docker socket, a Rancher API, or a Kubernetes server. It will then list the volumes on the platform and automatically back them up on a regular basis, using <a href=\"https://restic.net/\">Restic</a> to transfer the data to an object storage provider (e.g. 
AWS S3).</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/bivac-435x400.png\" alt=\"Bivac Logo\"></p>\n<p>In addition, Bivac can provide metrics on backup statuses, as it exposes a <a href=\"https://prometheus.io/\">Prometheus</a> endpoint.</p>\n<p>Using the REST client, backups can be listed and executed on demand, and volumes can be restored.</p>\n<h1>Installation</h1>\n<p>Bivac can easily be installed as a binary or as a container. Here are some examples of deploying it locally on Docker or on Kubernetes.</p>\n<h2>Using Docker</h2>\n<p>The following\n<code>docker-compose.yml</code>\nfile can be used to deploy the Bivac manager:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\nversion: &#39;3&#39;\nservices:\n  bivac:\n    image: camptocamp/bivac:2.2\n    command: &quot;manager -v&quot;\n    ports:\n      - &quot;8182:8182&quot;\n    volumes:\n      - &quot;/var/run/docker.sock:/var/run/docker.sock:ro&quot;\n    environment:\n      BIVAC_AGENT_IMAGE: camptocamp/bivac:2.1\n      BIVAC_SERVER_PSK: super-secret-psk\n      RESTIC_PASSWORD: not-so-good-password\n      BIVAC_TARGET_URL: s3:my-bucket\n      AWS_ACCESS_KEY_ID: XXXXX\n      AWS_SECRET_ACCESS_KEY: XXXXX</code>\n        </deckgo-highlight-code>\n<p>You can also deploy a local Prometheus server to retrieve the metrics. 
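</p>
<p>A minimal scrape configuration for this could look as follows (a sketch: it assumes the manager serves its metrics on the same port 8182 as in the compose file above, with Prometheus running on the same Docker network):</p>

```yaml
# Hypothetical prometheus.yml fragment scraping the Bivac manager
scrape_configs:
  - job_name: bivac
    static_configs:
      - targets: ['bivac:8182']  # "bivac" is the compose service name
```

<p>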
See <a href=\"https://github.com/camptocamp/bivac/blob/master/contrib/examples/docker-compose/docker-compose.yml\">the full example</a>.</p>\n<h2>Using Kubernetes</h2>\n<p>The easiest way to deploy a Bivac manager on Kubernetes is to use Camptocamp's Helm chart:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ helm repo add camptocamp http://charts.camptocamp.com\n$ helm install camptocamp/bivac --version 1.0.0</code>\n        </deckgo-highlight-code>\n<h1>Using the CLI</h1>\n<p>The CLI can be downloaded from <a href=\"https://github.com/camptocamp/bivac/releases/tag/2.2.0\">the releases page</a>. Once the binary is installed, you can use it to list backups, perform backups, or restore data.</p>\n<h2>Connecting to the manager</h2>\n<p>The CLI needs to be connected to the Bivac manager, using its HTTP URL and PSK (defined in the deployment). This can be done using either the\n<code>--remote.address</code>\nand\n<code>--server.psk</code>\noptions, or by setting the\n<code>BIVAC_REMOTE_ADDRESS</code>\nand\n<code>BIVAC_SERVER_PSK</code>\nenvironment variables.</p>\n<h2>Listing backups</h2>\n<p>The\n<code>bivac volumes</code>\ncommand lets you list the volumes managed by Bivac:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/gist?args=https%3A%2F%2Fgist.github.com%2Fraphink%2Ffe24bf6bc1205633471432f02ec13c15%20file%3Dbivac-volumes.sh\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>Performing backups</h2>\n<p>While Bivac automatically performs backups at a regular interval, the CLI can also be used to trigger backups manually:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/gist?args=https%3A%2F%2Fgist.github.com%2Fraphink%2Ffe24bf6bc1205633471432f02ec13c15%20file%3Dbivac-backup.sh\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>Restoring data</h2>\n<p>Bivac stores restic backups on object storage and lets you restore them using the\n<code>backup restore</code>\ncommand:</p>\n<iframe class=\"liquidTag\" 
src=\"https://dev.to/embed/gist?args=https%3A%2F%2Fgist.github.com%2Fraphink%2Ffe24bf6bc1205633471432f02ec13c15%20file%3Dbivac-restore.sh\" style=\"border: 0; width: 100%;\"></iframe>\n<h1>Going further</h1>\n<p>More features are available, such as the ability to <a href=\"https://github.com/camptocamp/bivac/wiki/Usage#manage-a-remote-restic-repository\">manage a remote Restic repository</a>. See the documentation for more information.</p>\n<h1>Conclusion</h1>\n<p>Bivac lets you easily back up your data, monitor its status, and restore it, whether you are using raw Docker, Rancher volumes, or Kubernetes.</p>\n<p><em>This post was originally published on <a href=\"https://www.camptocamp.com/actualite/backup-your-container-data/\">https://www.camptocamp.com/actualite/backup-your-container-data/</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/backup-your-container-data-2f3f\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/bitten-by-ha-puppetdb-postgresql-1eld/","relativePath":"posts/bitten-by-ha-puppetdb-postgresql-1eld.md","relativeDir":"posts","base":"bitten-by-ha-puppetdb-postgresql-1eld.md","name":"bitten-by-ha-puppetdb-postgresql-1eld","frontmatter":{"title":"Bitten by HA: PuppetDB & PostgreSQL","stackbit_url_path":"posts/bitten-by-ha-puppetdb-postgresql-1eld","date":"2020-05-22T14:32:17.632Z","excerpt":"When PuppetDB started misbehaving, it took us quite a while to realize the problem was somewhere 
else…","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--mZqZZYCG--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/ux2ltllgi7mjem72mf2z.png","comments_count":0,"positive_reactions_count":5,"tags":["puppet","postgres","ha","debugging"],"canonical_url":"https://dev.to/camptocamp-ops/bitten-by-ha-puppetdb-postgresql-1eld","template":"post"},"html":"<p>Last Wednesday morning, a colleague informed me that our internal Puppet infrastructure was performing slowly. Looking at the Puppetboard, I quickly realized there was another issue: all the nodes were marked as unreported, with the last report dating from more than 24 hours in the past.</p>\n<p>I checked the PuppetDB logs and saw that the reports were coming fine and being saved, so something else was wrong.</p>\n<h2>PuppetDB Upgrade</h2>\n<p>After a few hours of debugging, I still had no clue so I resorted to the option of upgrading the PuppetDB. I ideally wanted to stay with PuppetDB 5.x to avoid a major upgrade, but there was no available official Docker image for PuppetDB 5.x above 5.2.7 (which we are using).</p>\n<p>So I upgraded to PuppetDB 6.9.1 and then PuppetDB 6.10.1.</p>\n<h2>PostgreSQL SSL</h2>\n<p>At first, the connection with PostgreSQL failed as PuppetDB now defaults to fully verifying the PostgreSQL SSL certificate against the Puppet CA. So I had to modify the PostgreSQL connection string to include\n<code>?sslmode=require</code>\nand restart the PuppetDB.</p>\n<h2>Jolokia Authorization</h2>\n<p>The logs seemed fine at first, but the Puppetboard kept returning an error 500. That's when I realized that PuppetDB now required an access file to allow hosts by IP range. 
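The allow-by-IP-range logic such an access file encodes boils down to a CIDR membership test. A minimal sketch with Python's `ipaddress` module (the ranges here are illustrative, and the actual access file has its own format):

```python
import ipaddress

# Illustrative allow-list; the real access file uses its own syntax.
ALLOWED_RANGES = [ipaddress.ip_network(r) for r in ("10.0.0.0/8", "127.0.0.1/32")]

def is_allowed(client_ip):
    """Return True when client_ip falls inside one of the allowed CIDR ranges."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_allowed("10.42.0.7"))    # inside 10.0.0.0/8
print(is_allowed("192.168.1.5"))  # matches no range
```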
I modified our Helm chart to include <a href=\"https://github.com/voxpupuli/puppetboard/issues/566#%20issuecomment-622234110\">a new configuration file</a>, but Puppetboard still wasn't happy.</p>\n<h2>JDBC errors</h2>\n<p>I restarted the PuppetDB and started seeing weird log lines looping infinitely, right after the migration step in startup:</p>\n<deckgo-highlight-code logs   highlight-lines=\"\">\n          <code slot=\"code\">2020-05-20 18:55:40,169 WARN  [clojure-agent-send-off-pool-1] [p.p.jdbc] Caught exception. Last attempt, throwing exception.</code>\n        </deckgo-highlight-code>\n<p>At this point, <a href=\"https://twitter.com/csharpsteen\">Charlie Sharpsteen</a> started helping me to debug the issue.</p>\n<p>I tried manual\n<code>curl</code>\nrequests to the PuppetDB and the logs started sprouting stack traces mentioning both JDBC connection issues as well as the database schema version being wrong:</p>\n<deckgo-highlight-code logs   highlight-lines=\"\">\n          <code slot=\"code\">...\nCaused by: java.sql.SQLTransientConnectionException: PDBReadPool - Connection is not available, request timed out after 3000ms.\n...\nCaused by: com.zaxxer.hikari.pool.PoolBase$ConnectionSetupException: org.postgresql.util.PSQLException: ERROR: Please run PuppetDB with the migrate option set to true\n                 to upgrade your database. The detected migration level 66 is\n                 out of date.\n...</code>\n        </deckgo-highlight-code>\n<p>I connected to the PostgreSQL database and checked it. Everything looked fine, and\n<code>select max(version) from schema_migrations;</code>\nreturned\n<code>74</code>\nas expected, not\n<code>66</code>\n. So where did this number come from? Charlie started suspecting that there was another database involved…</p>\n<p>Totally out of other options, I decided to remove the lines with versions above 66 in the\n<code>schema_migrations</code>\ntable and see if restarting the PuppetDB would finalize the migration. 
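Hand-editing `schema_migrations` only works if each migration step can be replayed safely. A minimal SQLite simulation (not PuppetDB's actual migration code) of why plain DDL steps cannot:

```python
import sqlite3

def apply_migration(conn, ddl):
    """Apply one migration step; report instead of crashing on replay."""
    try:
        conn.execute(ddl)
        return "applied"
    except sqlite3.OperationalError as exc:
        return f"replay failed: {exc}"

conn = sqlite3.connect(":memory:")
ddl = "CREATE TABLE reports (id INTEGER PRIMARY KEY, status TEXT)"

print(apply_migration(conn, ddl))  # first run applies cleanly
print(apply_migration(conn, ddl))  # replaying the same step blows up
```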
That failed badly: unsurprisingly, the migration scripts are not idempotent.</p>\n<p>I was left with only one option: dropping the database and restoring it.\nBut then PostgreSQL refused to drop the database, saying it was read-only. I tried forcing read-write, but the database was marked in recovery.</p>\n<p>That's when I gave up for the day (as it was already past 23:00). I turned off the PuppetDB service entirely (actually, I scaled the deployment to 0 replicas) and went to bed, letting the nodes apply catalogs from cache for the next 30 hours (since Thursday was off).</p>\n<h2>DNS Issue</h2>\n<p>This morning, we got back to debugging this problem, and things started making more sense.</p>\n<p>First off, it turned out I was trying to drop the database on a slave cluster. I had ended up on the slave by using a production CNAME DNS entry which pointed to both the master and slave in round-robin…</p>\n<p>Once my colleague <a href=\"https://github.com/Vampouille\">Julien</a> had helped me realize that, he was able to drop the database on the master. We restarted the PuppetDB in version 6.10.1. But the errors were still there…</p>\n<h2>The Data is still there</h2>\n<p>We rolled back to PuppetDB 5.2.7 and a clean database… Everything started fine, but the Puppetboard still showed all nodes as unreported! Where could it get these nodes if the database had been wiped‽</p>\n<p>This led us to the conclusion that the data was still somewhere else… on the slave…</p>\n<h1>The problem</h1>\n<p>So here's what happened.</p>\n<h2>The root cause</h2>\n<p>Earlier this week, there was an outage with the object storage facility we're using for wal-g, our PostgreSQL backup tool.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/ux2ltllgi7mjem72mf2z.png\" alt=\"PostgreSQL cluster losing synchronization\"></p>\n<p>This led to a disk full on our master PostgreSQL machine. 
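PostgreSQL replication health is typically monitored by comparing WAL positions (LSNs) between master and slave. A minimal sketch of the LSN arithmetic with made-up values; on recent PostgreSQL versions, a real check would compare `pg_current_wal_lsn()` on the master with `pg_last_wal_replay_lsn()` on the slave:

```python
def lsn_to_bytes(lsn):
    """Convert a PostgreSQL LSN such as '16/3002808' to an absolute byte offset."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def replication_lag_bytes(master_lsn, slave_lsn):
    """How many WAL bytes the slave is behind the master."""
    return lsn_to_bytes(master_lsn) - lsn_to_bytes(slave_lsn)

# Made-up positions: the slave trails the master by a couple of KiB.
print(replication_lag_bytes("16/3002808", "16/3002000"))
```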
The disk full was very short as PostgreSQL restarted and removed all its WALs so it went unnoticed. However, this also broke replication, so the slave PostgreSQL database ended up stuck on that date. For some reason, we missed the replication alert.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/ayytz0tufaf6mrh83q29.png\" alt=\"Wal segments lost\"></p>\n<h2>The PuppetDB symptoms</h2>\n<p>PuppetDB is configured to write to master and read from slave. This is why all our nodes were unreported in Puppetboard (since they came from slave), even though PuppetDB kept writing the reports properly (in master)! This also explains the weird errors after upgrading to PuppetDB 6, since migration was properly done on the master (to schema v74) but read requests went to the slave (stuck in schema v66).</p>\n<h1>The solution</h1>\n<p>Since we had wiped the master's database this morning, we ended up restoring from the slave's version, and going back to PuppetDB 5.2.7 until we can properly solve the Jolokia potential issues with external access to the PuppetDB API.</p>\n<p>All nodes in Puppetboard have now returned to normal.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/bitten-by-ha-puppetdb-postgresql-1eld\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/colored-wrappers-for-kubectl-2pj1/","relativePath":"posts/colored-wrappers-for-kubectl-2pj1.md","relativeDir":"posts","base":"colored-wrappers-for-kubectl-2pj1.md","name":"colored-wrappers-for-kubectl-2pj1","frontmatter":{"title":"Colored wrappers for 
kubectl","stackbit_url_path":"posts/colored-wrappers-for-kubectl-2pj1","date":"2020-10-06T19:50:58.078Z","excerpt":"Kubectl commands, but in color","thumb_img_path":null,"comments_count":1,"positive_reactions_count":11,"tags":["kubernetes","devops","cli","zsh"],"canonical_url":"https://dev.to/raphink/colored-wrappers-for-kubectl-2pj1","template":"post"},"html":"<p>When using Kubernetes,\n<code>kubectl</code>\nis the command we use the most to visualize and debug objects.</p>\n<p>However, it currently does not support colored output, though there is <a href=\"https://github.com/kubernetes/kubectl/issues/524\">a feature request opened for this</a>.</p>\n<p>Let's see how we can add color support. I'll be using zsh with <a href=\"https://ohmyz.sh/\">oh my zsh</a>.</p>\n<p><em>Edit:</em> this feature was <a href=\"https://github.com/ohmyzsh/ohmyzsh/pull/9316\">merged in oh my zsh</a>, so it is now standard.</p>\n<h1>Zsh plugin</h1>\n<p>Let's make this extension into a zsh plugin called\n<code>kubectl_color</code>\n:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">❯ mkdir -p ~/.oh-my-zsh/custom/plugins/kubectl_color\n❯ touch ~/.oh-my-zsh/custom/plugins/kubectl_color/kubectl_color.plugin.zsh</code>\n        </deckgo-highlight-code>\n<p>Now we need to fill in this plugin.</p>\n<h2>JSON colorizing</h2>\n<p>Let's start with JSON, by adding an alias that colorizes JSON output using the infamous <a href=\"https://stedolan.github.io/jq/\">\n<code>jq</code>\n</a>:</p>\n<deckgo-highlight-code zsh   highlight-lines=\"\">\n          <code slot=\"code\">kj() {\n  kubectl &quot;$@&quot; -o json | jq\n}\n\ncompdef kj=kubectl</code>\n        </deckgo-highlight-code>\n<p>The\n<code>compdef</code>\nline ensures the\n<code>kj</code>\nfunction gets autocompleted just like\n<code>kubectl</code>\n.</p>\n<p><em>Edit:</em> I've added another wrapper for <a href=\"https://github.com/antonmedv/fx\">\n<code>fx</code>\n</a>, which provides a dynamic way to 
parse JSON:</p>\n<deckgo-highlight-code zsh   highlight-lines=\"\">\n          <code slot=\"code\">kjx() {\n  kubectl &quot;$@&quot; -o json | fx\n}\n\ncompdef kjx=kubectl</code>\n        </deckgo-highlight-code>\n<h2>YAML colorizing</h2>\n<p>Just like for JSON, we can use <a href=\"https://stedolan.github.io/jq/\">\n<code>yh</code>\n</a> to colorize YAML output:</p>\n<deckgo-highlight-code zsh   highlight-lines=\"\">\n          <code slot=\"code\">ky() {\n  kubectl &quot;$@&quot; -o yaml | yh\n}\n\ncompdef ky=kubectl</code>\n        </deckgo-highlight-code>\n<h1>Energize!</h1>\n<p>Our plugin is now ready, we only need to activate it in\n<code>~/.zshrc</code>\nby adding it to the list of plugins, e.g.:</p>\n<deckgo-highlight-code zsh   highlight-lines=\"\">\n          <code slot=\"code\">plugins=(git ruby kubectl kubectl_color)</code>\n        </deckgo-highlight-code>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/asciinema?args=363827\" style=\"border: 0; width: 100%;\"></iframe>\n<p>and with\n<code>fx</code>\n:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/asciinema?args=364137\" style=\"border: 0; width: 100%;\"></iframe>\n<p><em><a href=\"https://dev.to/raphink/colored-wrappers-for-kubectl-2pj1\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    
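For environments without zsh, the same idea can be approximated outside the shell. This Python sketch is a toy stand-in for `jq`'s colorizer: it pretty-prints JSON and highlights object keys with ANSI escapes:

```python
import json

ANSI_BLUE, ANSI_RESET = "\033[34m", "\033[0m"

def color_json(obj, indent=2):
    """Pretty-print JSON with object keys highlighted in blue (toy jq stand-in)."""
    out_lines = []
    for line in json.dumps(obj, indent=indent).splitlines():
        if '": ' in line:  # a line of the form   "key": value
            key, rest = line.split('": ', 1)
            line = f'{ANSI_BLUE}{key}"{ANSI_RESET}: {rest}'
        out_lines.append(line)
    return "\n".join(out_lines)

print(color_json({"kind": "Pod", "metadata": {"name": "demo"}}))
```

Piping `kubectl get ... -o json` into a script built around this gives a similar effect, with no claim of matching `jq`'s actual output.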
"},{"url":"/posts/decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk/","relativePath":"posts/decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk.md","relativeDir":"posts","base":"decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk.md","name":"decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk","frontmatter":{"title":"Decommissioning with Puppet: report & purge unmanaged resources","stackbit_url_path":"posts/decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk","date":"2020-07-23T14:42:09.777Z","excerpt":"Puppet can let you purge resources you do not manage explicitly","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":0,"positive_reactions_count":4,"tags":["puppet","devops","cfgmgmt","tutorial"],"canonical_url":"https://dev.to/camptocamp-ops/decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk","template":"post"},"html":"<p>Puppet lets you manage resources explicitly. But did you know you can also dynamically purge unmanaged resources using Puppet?</p>\n<h1>Why?</h1>\n<p>A user in your organization just left, and you need to remove their account from all nodes. If you were managing their account with Puppet (whether with a\n<code>user</code>\nresource type or using an <a href=\"https://forge.puppet.com/modules?q=accounts\">accounts module</a>), you need to make sure this user is absent:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">user { &#39;jdoe&#39;:\n  ensure =&gt; absent,\n}</code>\n        </deckgo-highlight-code>\n<p>Great. Job done. Now, how long should this resource be kept in your code? One hour? One week? One year? 
What if an old node that was turned off wakes up months from now with this account activated?</p>\n<p>To be honest, if a node that has been off for months suddenly wakes up, you'll probably have more issues than just old users if your Puppet code base is quite active…\nHowever, purging all unknown users is a much easier approach than managing each of them explicitly!</p>\n<h1>How?</h1>\n<p>As explained <a href=\"https://dev.to/camptocamp-ops/how-to-manage-files-with-puppet-55e4#whole-dynamic-purge\">in a previous post about managing files in Puppet</a>, Puppet is able to purge unmanaged resources. See that post for more details on how this works:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/link?args=camptocamp-ops%2Fhow-to-manage-files-with-puppet-55e4%23whole-dynamic-purge\" style=\"border: 0; width: 100%;\"></iframe>\n<h1>What if I don't want to purge?</h1>\n<p>What if, instead of purging, I'd just like Puppet to report the unmanaged resources but not do anything about them?</p>\n<p>Luckily for us,\n<code>noop</code>\nworks fine with the\n<code>purge</code>\ntype, so you can use something like:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">purge { &#39;user&#39;:\n  noop   =&gt; true,\n  unless =&gt; [\n    [&#39;uid&#39;, &#39;&lt;&#39;, &#39;1000&#39;],\n    [&#39;name&#39;, &#39;==&#39;, &#39;nobody&#39;],\n  ],\n}</code>\n        </deckgo-highlight-code>\n<p>This code will mark all users with a UID above 999 (except the\n<code>nobody</code>\nuser) as to be purged, but it won't actually remove them. 
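The `unless` conditions translate to a simple predicate: a user is a candidate for purging when none of the exceptions match. A sketch of that selection logic in Python, with a made-up user list:

```python
# Mirror of the purge conditions: spare users with uid < 1000 and the
# "nobody" user; everything else is a purge candidate.
def purge_candidates(users):
    return [u for u in users
            if not (u["uid"] < 1000 or u["name"] == "nobody")]

users = [  # made-up user list
    {"name": "root",   "uid": 0},
    {"name": "nobody", "uid": 65534},
    {"name": "iperf",  "uid": 1001},
]
print([u["name"] for u in purge_candidates(users)])  # ['iperf']
```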
As a result, you'll get\n<code>noop</code>\nresources in your reports, for example in Puppetboard:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/3xu0q5i3e98bv27ih117.png\" alt=\"Noop resources\"></p>\n<p>And then in the report, you'll see the unmanaged users:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/ckpsim9bx6igwp09it4l.png\" alt=\"Report view\"></p>\n<h1>Forcing purge</h1>\n<p>If you see users that should be purged, you can add back a\n<code>user</code>\nresource in your Puppet code to ensure their absence:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">user { &#39;iperf&#39;:\n  ensure =&gt; absent,\n}</code>\n        </deckgo-highlight-code>\n<p>Another option is to make it a bit more dynamic. I've added an option in my\n<code>accounts</code>\nbase class to use a dynamic fact to purge users on demand:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">class osbase::accounts (\n  Boolean $purge_users = str2bool($facts[&#39;purge_users&#39;]),\n) {\n  purge { &#39;user&#39;:\n    noop   =&gt; !$purge_users,\n    unless =&gt; [\n      [&#39;uid&#39;, &#39;&lt;&#39;, &#39;1000&#39;],\n      [&#39;name&#39;, &#39;==&#39;, &#39;nobody&#39;],\n    ],\n  }\n}</code>\n        </deckgo-highlight-code>\n<p>The\n<code>purge_users</code>\nfact doesn't exist by default, so I can define it on the fly when I need to purge users.\nNow I can run\n<code>puppet agent</code>\non a node and force purging the users with:</p>\n<deckgo-highlight-code    highlight-lines=\"undefined\">\n          <code slot=\"code\">$ FACTER_purge_users=y puppet agent -t</code>\n        </deckgo-highlight-code>\n<p>And all unmanaged users will be removed from the node!</p>\n<p><em>Do you have specific Puppet needs? 
<a href=\"https://www.camptocamp.com/contact\">Contact us</a>, we can help you!</em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/decomissioning-with-puppet-report-purge-unmanaged-resources-1jgk\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/deploying-public-keys-in-docker-containers-41cd/","relativePath":"posts/deploying-public-keys-in-docker-containers-41cd.md","relativeDir":"posts","base":"deploying-public-keys-in-docker-containers-41cd.md","name":"deploying-public-keys-in-docker-containers-41cd","frontmatter":{"title":"Deploying public keys in Docker containers","stackbit_url_path":"posts/deploying-public-keys-in-docker-containers-41cd","date":"2020-05-08T07:57:33.416Z","excerpt":"One of the hard problems to solve when using Docker in production is deploying secrets. githut_pki makes SSH key deployment easy.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--jI5YGum6--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/1doni1qp0l9lqk240myf.png","comments_count":0,"positive_reactions_count":6,"tags":["devops","showdev","opensource","docker"],"canonical_url":"https://www.camptocamp.com/en/actualite/deploying-public-keys-in-docker-containers/","template":"post"},"html":"<p>One of the hard problems to solve when using Docker in production is deploying secrets. 
In particular, public keys are hard to deploy because they are multiline and there is usually one key per authorized user.</p>\n<p>Since all our users have accounts on GitHub with their SSH key, it made sense to us to use GitHub as a centralized PKI for SSH keys. Starting with a simple Ruby script connecting to the GitHub API, we soon realized we would need a generic way of deploying public keys from GitHub if we persisted in this approach.</p>\n<p>This gave birth to the <a href=\"https://github.com/camptocamp/github_pki\" target=\"_blank\">github_pki</a>, a generic command line tool using the GitHub API to deploy SSH and X509 keys from GitHub organizations, teams, and individual users.</p>\n<p>Installing can be done from source:</p>\n<deckgo-highlight-code dockerfile   highlight-lines=\"\">\n          <code slot=\"code\">FROM debian:jessie\n\nENV GOPATH=/go\nRUN apt-get update &amp;&amp; apt-get install -y golang-go git \\\n  &amp;&amp; go get github.com/camptocamp/github_pki \\\n  &amp;&amp; apt-get autoremove -y golang-go git \\\n  &amp;&amp; rm -rf /var/lib/apt/lists/*</code>\n        </deckgo-highlight-code>\n<p>Or by inheriting one of the <a href=\"https://hub.docker.com/r/camptocamp/github_pki/tags/\" target=\"_blank\">official Docker images</a>.</p>\n<p>The <tt>github_pki</tt> command can then simply be called from within an entrypoint script to deploy keys:</p>\n<deckgo-highlight-code bash   highlight-lines=\"\">\n          <code slot=\"code\"># !/bin/sh\n\n# Deploy users keys as X509 public keys to SSL_DIR\nSSL_DIR=/etc/puppetlabs/mcollective/clients /go/bin/github_pki\n\n# Deploy user keys as an authorized_keys file\nAUTHORIZED_KEYS=/root/.ssh/authorized_keys /go/bin/github_pki</code>\n        </deckgo-highlight-code>\n<p>Various <a href=\"https://github.com/camptocamp/github_pki# environment-variables\" target=\"_blank\">environment variables</a> can be used to tune which keys should be deployed:</p>\n<deckgo-highlight-code console   
highlight-lines=\"\">\n          <code slot=\"code\">$ docker run -e AUTHORIZED_KEYS=/root/.ssh/authorized_keys \\\n             -e SSL_DIR=/etc/test/ssl \\\n             -e GITHUB_ORG=&quot;myorg&quot; \\\n             -e GITHUB_TEAM=&quot;mypals&quot; \\\n             -e GITHUB_USERS=&quot;otheruser&quot; \\\n             -e GITHUB_TOKEN=398d6d326a546d40f3f1ef93345d1fc5ee0f0j38 \\\n             mydockerimage\nrun-parts: executing /docker-entrypoint.d/25-populate-ssl-clients.sh\ntime=&quot;2016-03-22T09:45:52Z&quot; level=info msg=&quot;Adding users for team mypals&quot; \ntime=&quot;2016-03-22T09:45:52Z&quot; level=info msg=&quot;Adding user bob&quot; \ntime=&quot;2016-03-22T09:45:52Z&quot; level=info msg=&quot;Adding user alice&quot; \ntime=&quot;2016-03-22T09:45:52Z&quot; level=info msg=&quot;Adding individual user otheruser&quot; \ntime=&quot;2016-03-22T09:45:53Z&quot; level=info msg=&quot;Getting keys for user bob&quot; \ntime=&quot;2016-03-22T09:45:53Z&quot; level=info msg=&quot;Getting keys for user alice&quot; \ntime=&quot;2016-03-22T09:45:53Z&quot; level=info msg=&quot;Getting keys for user otheruser&quot;\ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Generating /root/.ssh/authorized_keys&quot; \ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Dumping X509 keys to /etc/puppetlabs/mcollective/clients&quot; \ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Converting key bob/1325852 to X509&quot; \ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Converting key alice/123756 to X509&quot; \ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Converting key alice/7845928 to X509&quot; \ntime=&quot;2016-03-22T09:45:59Z&quot; level=info msg=&quot;Converting key otheruser/8540586 to X509&quot;</code>\n        </deckgo-highlight-code>\n<p><em>This blog post was originally published on <a 
href=\"https://www.camptocamp.com/en/actualite/deploying-public-keys-in-docker-containers/\">camptocamp.com</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/deploying-public-keys-in-docker-containers-41cd\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/error-console-highlighting-1c60/","relativePath":"posts/error-console-highlighting-1c60.md","relativeDir":"posts","base":"error-console-highlighting-1c60.md","name":"error-console-highlighting-1c60","frontmatter":{"title":"Error console highlighting","stackbit_url_path":"posts/error-console-highlighting-1c60","date":"2020-05-13T20:47:24.431Z","excerpt":"When composing posts on dev.to, is there a way to display error messages so they appear as...","thumb_img_path":null,"comments_count":3,"positive_reactions_count":6,"tags":["discuss","question","writing","markdown"],"canonical_url":"https://dev.to/raphink/error-console-highlighting-1c60","template":"post"},"html":"<p>When composing posts on dev.to, is there a way to display error messages so they appear as console output, but in red?</p>\n<p>I've tried things like:</p>\n<pre class=\"highlight markdown\">\n<code>\n\n```error\nMy error message\n```\n\n</code>\n</pre>\n<p>or</p>\n<deckgo-highlight-code html   highlight-lines=\"\">\n          <code slot=\"code\">&lt;pre&gt;\n&lt;code style=&quot;color:red;font-weight:bold&quot;&gt;\nMy error message\n&lt;/code&gt;\n&lt;/pre&gt;</code>\n        </deckgo-highlight-code>\n<p>to no avail…</p>\n<p>Is there a way to achieve this?</p>\n<p><em><a 
href=\"https://dev.to/raphink/error-console-highlighting-1c60\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/generic-blog-17ni/","relativePath":"posts/generic-blog-17ni.md","relativeDir":"posts","base":"generic-blog-17ni.md","name":"generic-blog-17ni","frontmatter":{"title":"dev.to as a generic blog?","stackbit_url_path":"posts/generic-blog-17ni","date":"2020-05-02T18:41:25.829Z","excerpt":"Is it a good idea to use dev.to for other subjects than development?","thumb_img_path":null,"comments_count":5,"positive_reactions_count":5,"tags":["discuss","beginners"],"canonical_url":"https://dev.to/raphink/generic-blog-17ni","template":"post"},"html":"<p>Do any of you use dev.to as a generic blog, besides development-related subjects (I'm thinking genealogy for example)?</p>\n<p>What would be the pros and cons?</p>\n<p><em><a href=\"https://dev.to/raphink/generic-blog-17ni\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    
"},{"url":"/posts/getting-puppet-report-metrics-from-puppetdb-6bp/","relativePath":"posts/getting-puppet-report-metrics-from-puppetdb-6bp.md","relativeDir":"posts","base":"getting-puppet-report-metrics-from-puppetdb-6bp.md","name":"getting-puppet-report-metrics-from-puppetdb-6bp","frontmatter":{"title":"Getting Puppet Report Metrics from PuppetDB","stackbit_url_path":"posts/getting-puppet-report-metrics-from-puppetdb-6bp","date":"2020-06-02T09:22:19.303Z","excerpt":"Instead of sending metrics from the Puppetserver to Prometheus, they can be retrieved using the PuppetDB Metrics API.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--DX3k0Pdb--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/jczukvqf1fs7dau9jnfp.png","comments_count":0,"positive_reactions_count":4,"tags":["puppet","showdev","opensource","devops"],"canonical_url":"https://dev.to/camptocamp-ops/getting-puppet-report-metrics-from-puppetdb-6bp","template":"post"},"html":"<p>Puppet agent run reports contain useful metrics, such as the number of resources that were modified or failed to apply, or how much time each step of the run took.</p>\n<p>The traditional way of retrieving these metrics is using a report processor on the Puppet master.</p>\n<p>Since Prometheus is now a <em>de facto</em> standard in metrics collection, there exists a <a href=\"https://github.com/voxpupuli/puppet-prometheus_reporter\">Prometheus reporter, maintained by the VoxPupuli community</a>. 
However, it uses a dropzone directory of yaml files with a local node exporter, so it's not a very clean approach.</p>\n<p>On top of this, reports and their metrics are already exported to the PuppetDB, which provides its own API to access this data.</p>\n<h1>Prometheus PuppetDB Exporter</h1>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=camptocamp%2Fprometheus-puppetdb-exporter\" style=\"border: 0; width: 100%;\"></iframe>\n<p>Prometheus PuppetDB Exporter is a simple go binary that can scrape the PuppetDB for report metrics for Prometheus. It runs independently of the Puppet stack, and can be tuned to collect various types of metrics:</p>\n<ul>\n<li>resources</li>\n<li>time</li>\n<li>changes</li>\n<li>events</li>\n</ul>\n<p>The exporter provides metrics in the form\n<code>puppet_report_&#x3C;type></code>\nfor each of these types.</p>\n<deckgo-highlight-code prometheus   highlight-lines=\"\">\n          <code slot=\"code\"># HELP puppetdb_exporter_build_info puppetdb exporter build informations\n# TYPE puppetdb_exporter_build_info gauge\npuppetdb_exporter_build_info{build_date=&quot;2019-02-18&quot;,commit_sha=&quot;XXXXXXXXXX&quot;,golang_version=&quot;go1.11.4&quot;,version=&quot;1.0.0&quot;} 1\n# HELP puppetdb_node_report_status_count Total count of reports status by type\n# TYPE puppetdb_node_report_status_count gauge\npuppetdb_node_report_status_count{status=&quot;changed&quot;} 1\npuppetdb_node_report_status_count{status=&quot;failed&quot;} 1\npuppetdb_node_report_status_count{status=&quot;unchanged&quot;} 1</code>\n        </deckgo-highlight-code>\n<p>This makes it fully compatible with Vox Pupuli's reporter implementation.</p>\n<h1>Deploying</h1>\n<p>The exporter is provided as a <a href=\"https://hub.docker.com/r/camptocamp/prometheus-puppetdb-exporter\">Docker image</a>, and is included by default in <a href=\"https://github.com/camptocamp/charts/tree/master/puppetdb\">Camptocamp's PuppetDB Helm chart</a>.</p>\n<h1>Usage in 
Grafana</h1>\n<p>Coupled with (a slightly modified version of) <a href=\"https://grafana.com/grafana/dashboards/700\">Julien Pivotto's Puppet Report dashboard</a>, you can make some pretty graphs from these metrics:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/jczukvqf1fs7dau9jnfp.png\" alt=\"Node graphs\"></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/getting-puppet-report-metrics-from-puppetdb-6bp\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/github-sponsors-and-dev-to-posts-51b1/","relativePath":"posts/github-sponsors-and-dev-to-posts-51b1.md","relativeDir":"posts","base":"github-sponsors-and-dev-to-posts-51b1.md","name":"github-sponsors-and-dev-to-posts-51b1","frontmatter":{"title":"💡 GitHub Sponsors and dev.to posts","stackbit_url_path":"posts/github-sponsors-and-dev-to-posts-51b1","date":"2020-07-02T05:52:51.122Z","excerpt":"GitHub Sponsors could be leveraged on dev.to to generate revenue ","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--4g3b1qoK--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/qdp4swhb2kngx4cmsowd.png","comments_count":3,"positive_reactions_count":5,"tags":["discuss","github","sponsors","meta"],"canonical_url":"https://dev.to/raphink/github-sponsors-and-dev-to-posts-51b1","template":"post"},"html":"<p>Yesterday, I read <a href=\"https://calebporzio.com/i-just-hit-dollar-100000yr-on-github-sponsors-heres-how-i-did-it\">a blog post by Caleb Porzio</a> about making money from Open Source projects, in particular by leveraging the 
GitHub sponsors program.</p>\n<p>His conclusion is that the best way to use GitHub Sponsors is to make both free and paid educational content, such that free content attracts an audience, and then they want to support you to access the advanced, paid, content.</p>\n<p>It seems to me that the DEV community, being already connected to GitHub, would be a perfect place to implement this idea.</p>\n<p>There could be a tag in the meta parameters of a post, which would restrict its access (fully or partly, e.g. by hiding the end of the article like lots of newspapers do) to the author's GitHub sponsors:</p>\n<deckgo-highlight-code    highlight-lines=\"undefined\">\n          <code slot=\"code\"># Restrict to sponsors of the raphink GitHub account\nrestrict_sponsors: raphink\n# Restrict to sponsors of tier $10 or above\nrestrict_sponsors: raphink/10</code>\n        </deckgo-highlight-code>\n<p>What do you think? Is that a feature that would be interesting for the DEV community?</p>\n<p><em><a href=\"https://dev.to/raphink/github-sponsors-and-dev-to-posts-51b1\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/integrating-prometheus-with-puppetdb-aom/","relativePath":"posts/integrating-prometheus-with-puppetdb-aom.md","relativeDir":"posts","base":"integrating-prometheus-with-puppetdb-aom.md","name":"integrating-prometheus-with-puppetdb-aom","frontmatter":{"title":"Integrating Prometheus with PuppetDB","stackbit_url_path":"posts/integrating-prometheus-with-puppetdb-aom","date":"2020-04-30T10:48:51.895Z","excerpt":"Many applications are not containerized, and we still 
need to monitor their nodes. Prometheus PuppetDB SD discovers nodes in the PuppetDB and generates Prometheus configurations automatically.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--GyMJoqm7--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xbanner-1.png.pagespeed.ic.-LxmyH1pjm.webp","comments_count":0,"positive_reactions_count":0,"tags":["devops","showdev","puppet","opensource"],"canonical_url":"https://www.camptocamp.com/actualite/integrating-prometheus-with-puppetdb/","template":"post"},"html":"<p>Most companies that have switched their deployments to containers have faced this issue: traditional monitoring systems just don't cut it when it comes to observability of containerized applications. Instead of focusing on nodes and the applications running on them, the cluster approach to container orchestration systems requires targeting application instances, which can run on multiple nodes —even several times on a single node— and typically have short life spans.</p>\n<h1>A new monitoring paradigm</h1>\n<p>Fortunately for us, Prometheus came early on in the ecosystem, providing an elegant solution to gather metrics from microservices and derive all sorts of observability tools from them, including monitoring. Prometheus also lets you monitor the cluster nodes themselves, by collecting their metrics and aggregating them into dedicated views. Problem solved, we can now get rid of our historical monitoring infrastructure. Or can we?</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/prometheus-550x120.png\"></p>\n<p>As much as we'd like to think all our applications are now containerized and all our machines are interchangeable nodes in a large herd of cattle, the reality is often very different. Most companies still have a large number of specialized machines —even snowflakes at times— that are not taken into account by your latest Kubernetes cluster. 
Should we keep Nagios running for those, or is it possible to make them fit into the new paradigm?</p>\n<h1>Using PuppetDB</h1>\n<p>For those of us running Puppet, the PuppetDB has for years been a great source of information on nodes managed by Puppet. It contains facts, catalogs, reports and more for all the nodes in the fleet. Let's use this information to monitor the nodes and their services dynamically, using Prometheus!</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/puppet-2-400x400.png\"></p>\n<p><a href=\"https://github.com/camptocamp/prometheus-puppetdb-sd\">Prometheus PuppetDB SD</a> links the PuppetDB with your Prometheus infrastructure. At regular intervals, it queries the PuppetDB and retrieves a list of targets. It then outputs a scrape configuration which Prometheus can use. Sounds simple? It really is!</p>\n<h1>Puppet and Prometheus</h1>\n<p>The Vox Pupuli Puppet community has a great module to manage Prometheus. The <a href=\"https://forge.puppet.com/puppet/prometheus\">puppet-prometheus</a> module works out of the box and lets you install and configure a Prometheus server. It also provides the\n<code>prometheus::scrape_job</code>\ndefined type to declare scrape jobs to be added to the server.</p>\n<p>Declaring these scrape jobs as exported resources tags them as such in the PuppetDB, and they can then be realized on the Prometheus server. This is extremely useful, but only works when the Prometheus server is managed by Puppet, not when it is containerized! 
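Conceptually, the service-discovery step boils down to turning PuppetDB resource records into a Prometheus target list. Here is a minimal Python sketch of that transformation (not the actual Go implementation), assuming the exported scrape-job resources have already been fetched from the PuppetDB API; the `job_name` and `targets` parameter names are illustrative:

```python
import json

def resources_to_file_sd(resources):
    """Turn exported scrape-job resources (shaped like PuppetDB's
    resources endpoint output) into a Prometheus file_sd target list."""
    groups = []
    for res in resources:
        params = res.get("parameters", {})
        groups.append({
            "targets": params.get("targets", []),
            "labels": {
                "job": params.get("job_name", "puppet"),
                "certname": res.get("certname", ""),
            },
        })
    return groups

# One exported resource, as Puppet might have stored it:
exported = [{
    "certname": "web01.example.com",
    "parameters": {"job_name": "node", "targets": ["web01.example.com:9100"]},
}]
print(json.dumps(resources_to_file_sd(exported), indent=2))
```

The resulting JSON is exactly what Prometheus's `file_sd_configs` mechanism consumes: a list of target groups, each with `targets` and `labels`.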
Prometheus PuppetDB SD fills this gap by scraping the exported resources from the PuppetDB and generating the Prometheus configurations, so you can get the best of both worlds: scrape jobs declared in Puppet alongside your applications, and a containerized Prometheus server!</p>\n<h1>Installation and Configuration</h1>\n<p>There are several ways to install Prometheus PuppetDB SD, but let's face it: if you're using Prometheus, you probably have a Kubernetes cluster already, so let's install it using Helm:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ helm repo add camptocamp http://charts.camptocamp.com\n$ helm install camptocamp/prometheus-puppetdb-sd --version 2.0.3</code>\n        </deckgo-highlight-code>\n<p>The following values should be provided:</p>\n<ul>\n<li><code>prometheusPuppetdbSd.args.puppetdb.url</code>: the PuppetDB URI</li>\n<li><code>prometheusPuppetdbSd.args.prometheus.proxy-url</code>: the Prometheus Push-proxy URL</li>\n<li><code>prometheusPuppetdbSd.args.output.k8s-secret.secret-name</code>: the name of the k8s secret used to output the Prometheus configuration</li>\n<li><code>prometheusPuppetdbSd.args.output.k8s-secret.secret-key</code>: the key in the k8s secret used for the output file name</li>\n<li><code>CACert</code>: the CA certificate used to authenticate the PuppetDB</li>\n<li><code>Cert</code>: the certificate used to connect to the PuppetDB</li>\n<li><code>Key</code>: the private key used to connect to the PuppetDB</li>\n</ul>\n<p>With the <a href=\"https://github.com/helm/charts/tree/master/stable/prometheus-operator\">official Prometheus Operator</a>, setting\n<code>prometheusSpec.additionalScrapeConfigsExternal</code>\nto\n<code>true</code>\nwill automatically configure Prometheus to mount the secret called\n<code>{{ template 
\"prometheus-operator.fullname\" . }}-prometheus-scrape-confg</code>\nand use the\n<code>additional-scrape-configs.yaml</code>\nkey in it as additional configuration. This is thus the easiest way to configure Prometheus PuppetDB SD.</p>\n<p>That's it, you're set!</p>\n<p>Don't hesitate to provide feedback and pull requests on the <a href=\"https://github.com/camptocamp/prometheus-puppetdb-sd\">GitHub repository</a>!</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=camptocamp%2Fprometheus-puppetdb-sd\" style=\"border: 0; width: 100%;\"></iframe>\n<p><em>This post was originally published on <a href=\"https://www.camptocamp.com/actualite/integrating-prometheus-with-puppetdb/\">https://www.camptocamp.com/actualite/integrating-prometheus-with-puppetdb/</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/integrating-prometheus-with-puppetdb-aom\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/keep-an-eye-on-your-terraform-states-4lf5/","relativePath":"posts/keep-an-eye-on-your-terraform-states-4lf5.md","relativeDir":"posts","base":"keep-an-eye-on-your-terraform-states-4lf5.md","name":"keep-an-eye-on-your-terraform-states-4lf5","frontmatter":{"title":"Keep an eye on your Terraform states","stackbit_url_path":"posts/keep-an-eye-on-your-terraform-states-4lf5","date":"2020-05-08T08:11:27.873Z","excerpt":"About 4 years ago, we started using Terraform. 
Many things we were doing manually in the cloud at the time are now coded.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--EFbU1JTl--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xterraform_bandeau.png.pagespeed.ic.neAGqH-_lX.webp","comments_count":0,"positive_reactions_count":6,"tags":["terraform","devops","showdev","opensource"],"canonical_url":"https://www.camptocamp.com/actualite/keep-an-eye-on-your-terraform-states/","template":"post"},"html":"<p><em>This blog post was originally published on <a href=\"https://www.camptocamp.com/actualite/keep-an-eye-on-your-terraform-states/\">camptocamp.com</a></em></p>\n<p>About 4 years ago, we started using Terraform. Many things we were doing manually in the cloud at the time are now coded. As a result, our <a href=\"http://www.terraform.io\">Terraform</a> base code now contains over a hundred states.</p>\n<h1>Terraform everything!</h1>\n<p>A lot of those resources already existed before, some managed by <a href=\"https://aws.amazon.com/cloudformation/\">CloudFormation</a>, others manually. Being able to import resources has helped a lot to integrate new Terraform code with existing infrastructure. We now have a unified system to control them, and most importantly to know who created them, how and why. Collaboration was made easier by using profiles instead of hardcoded credentials, the introduction of remote states stored on AWS S3, as well as state locks on DynamoDB.</p>\n<p>With all this, one thing remained: how do we keep an eye on all these states, resources and locks that are stored on AWS? 
Could there be a way to visualize and query them?</p>\n<h1>Introducing Terraboard</h1>\n<p><img src=\"https://raw.githubusercontent.com/camptocamp/terraboard/master/logo/terraboard_logo.png\" alt=\"Terraboard\"></p>\n<p><a href=\"https://github.com/camptocamp/terraboard\">Terraboard</a> was born in an attempt to bring an easy-to-use Web Interface for Terraform states.</p>\n<p>It currently supports states stored in AWS S3, as well as locks on DynamoDB. It features 4 views: overview, state view, compare view and search.</p>\n<p>Terraboard requires an S3 bucket with versioning activated (for history and comparison between versions), as well as a PostgreSQL database, where all S3 states will be stored as a data cache.</p>\n<p>Terraboard is composed of two components:</p>\n<ul>\n<li>a server written in Go, which synchronizes the state files from the S3 bucket into the PostgreSQL database, and provides an API for the UI;</li>\n<li>a Web UI written in AngularJS which consumes the API data and serves the Web pages.</li>\n</ul>\n<h2>Overview</h2>\n<p>The overview is the landing page in Terraboard. It provides information about the most recent version of each state, along with the Terraform version used to apply it, its serial, the number of resources it features, and an activity sparkline. Clicking the sparkline lets you easily access any version of a state.</p>\n<p>Graphs present statistics on the main resource types and Terraform versions used, as well as the number of states locked (if DynamoDB is configured).</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/main-550x326.png\" alt=\"Main View\"></p>\n<h2>State view</h2>\n<p>The State view presents details about a state file's resources. Resources are listed by module and can be filtered. 
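As an aside, the per-state resource statistics shown in the overview are easy to reproduce once a state file is parsed. A minimal Python sketch (not Terraboard's actual Go code), assuming a Terraform 0.12-style state layout with a top-level `resources` list:

```python
import json
from collections import Counter

def resource_type_stats(state_json):
    """Count resources by type in a Terraform state document
    (modern states keep a top-level "resources" list)."""
    state = json.loads(state_json)
    return Counter(r["type"] for r in state.get("resources", []))

# A tiny, hand-made example state:
sample_state = json.dumps({
    "version": 4,
    "terraform_version": "0.12.24",
    "serial": 7,
    "resources": [
        {"type": "aws_instance", "name": "web", "instances": [{}]},
        {"type": "aws_instance", "name": "db", "instances": [{}]},
        {"type": "aws_s3_bucket", "name": "states", "instances": [{}]},
    ],
})
print(resource_type_stats(sample_state))
```

Applied to every state version in the bucket, tallies like this are what feed the resource-type graphs.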
A version selector lets you view historical data for the state.</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/state-550x328.png\" alt=\"State View\"></p>\n<h2>Compare view</h2>\n<p>While on the State view, you can pick a second version to compare with the current one. This computes the differences between the two versions and displays:</p>\n<ul>\n<li>A list of differences, displayed as a unified diff;</li>\n<li>A list of resources only in the first version;</li>\n<li>A list of resources only in the new version.</li>\n</ul>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/compare-550x330.png\" alt=\"Compare View\"></p>\n<h2>Search view</h2>\n<p>If you've ever wondered in which Terraform state a node was managed, you can easily find this out in the Search view. The Search view lets you filter resources and their attributes by type, name, key or value, as well as by the Terraform version used.</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/search-1-550x330.png\" alt=\"Search View\"></p>\n<h1>I want to try it!</h1>\n<p>Are you ready to try Terraboard? If you're using Docker, this is very easy. 
All you need is a PostgreSQL database and your AWS credentials:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">docker run -d -p 8080:8080 \\\n   -e AWS_REGION=&lt;AWS_DEFAULT_REGION&gt; \\\n   -e AWS_ACCESS_KEY_ID=&lt;AWS_ACCESS_KEY_ID&gt; \\\n   -e AWS_SECRET_ACCESS_KEY=&lt;AWS_SECRET_ACCESS_KEY&gt; \\\n   -e AWS_BUCKET=&lt;terraform-bucket&gt; \\\n   -e AWS_DYNAMODB_TABLE=&lt;terraform-locks-table&gt; \\\n   -e DB_PASSWORD=&quot;mygreatpasswd&quot; \\\n   --link postgres:db \\\n   camptocamp/terraboard:latest</code>\n        </deckgo-highlight-code>\n<p>A Rancher template is also available in <a href=\"https://github.com/camptocamp/camptocamp-rancher-catalog\">Camptocamp's Rancher Catalog</a>, as well as a <a href=\"https://github.com/camptocamp/charts/tree/master/terraboard\">Helm Chart</a>.</p>\n<h3>I want to help!</h3>\n<p>Terraboard is an open-source project and we heartily welcome all contributions to it. Don't hesitate to submit <a href=\"https://github.com/camptocamp/terraboard\">Pull Requests on GitHub</a>.</p>\n<p>You are also welcome to <a href=\"https://gitter.im/camptocamp/terraboard\">join us on Gitter</a> to discuss new ideas.</p>\n<p>Happy Terraforming!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/keep-an-eye-on-your-terraform-states-4lf5\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    
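For the curious, the resource diff at the heart of the compare view can be sketched in a few lines of Python (a simplification of Terraboard's Go implementation, with each state version reduced to a dict mapping resource addresses to their attributes):

```python
def compare_states(old, new):
    """Compute the three lists the compare view shows: resources only in
    the old version, only in the new one, and resources present in both
    but with different attributes."""
    only_old = sorted(set(old) - set(new))
    only_new = sorted(set(new) - set(old))
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return only_old, only_new, changed

v1 = {"aws_instance.web": {"ami": "ami-111"},
      "aws_s3_bucket.states": {"acl": "private"}}
v2 = {"aws_instance.web": {"ami": "ami-222"},
      "aws_instance.db": {"ami": "ami-111"}}
print(compare_states(v1, v2))
# → (['aws_s3_bucket.states'], ['aws_instance.db'], ['aws_instance.web'])
```

The unified diff shown in the UI is then just a textual rendering of the `changed` entries.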
"},{"url":"/posts/march-cloud-native-romandie-meetup-o2f/","relativePath":"posts/march-cloud-native-romandie-meetup-o2f.md","relativeDir":"posts","base":"march-cloud-native-romandie-meetup-o2f.md","name":"march-cloud-native-romandie-meetup-o2f","frontmatter":{"title":"March Cloud Native Romandie Meetup","stackbit_url_path":"posts/march-cloud-native-romandie-meetup-o2f","date":"2021-04-01T13:44:49.095Z","excerpt":"The last Cloud Native Romandie Meetup took place on March 25th","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--md-eQ-xl--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cf6t3h6z8shvv26x7tqr.png","comments_count":0,"positive_reactions_count":0,"tags":["meetup","cloudnative","kubernetes","cicd"],"canonical_url":"https://dev.to/camptocamp-ops/march-cloud-native-romandie-meetup-o2f","template":"post"},"html":"<p>Last week, we organized our last Cloud Native Romandie Meetup. Due to the current situation, this was an online event like the previous occurrences.</p>\n<p>The meetup was recorded and can be viewed again on YouTube.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/youtube?args=21DZ6hD97kM\" style=\"border: 0; width: 100%;\"></iframe>\n<h1>Subjects</h1>\n<p>For this edition, we had presentations by <a href=\"https://www.cloudbees.com/\">CloudBees</a>, <a href=\"https://exoscale.com\">Exoscale</a>, and <a href=\"https://camptocamp.com\">Camptocamp</a>.</p>\n<h2>CloudBees CI</h2>\n<p><a href=\"https://www.linkedin.com/in/fredericgibelin/\">Frédéric Gibelin</a> from <a href=\"https://www.cloudbees.com/\">CloudBees</a> presented <a href=\"https://docs.cloudbees.com/docs/cloudbees-ci/latest/\">CloudBees CI</a>, a solution that helps users scale their Jenkins Enterprise platform in the Cloud.</p>\n<p><a href=\"https://drive.google.com/file/d/1u_pYAMA562a7Rzs2B-4XBImn6WsGHLpn/view\">See the slides</a></p>\n<h2>Exoscale SKS</h2>\n<p>Next <a 
href=\"https://twitter.com/pyr\">Pierre-Yves Ritschard</a> and <a href=\"https://twitter.com/_mcorbin\">Mathieu Corbin</a> from <a href=\"https://exoscale.com\">Exoscale</a> presented <a href=\"https://community.exoscale.com/documentation/sks/\">SKS</a>, a Kubernetes as-a-Service, operating the user’s cluster on their behalf and taking care of the underlying infrastructure.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/speakerdeck?args=080c0fc92a8a4d37b5d4ef02eb590d19\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>Camptocamp DevOps Stack</h2>\n<p>To finish, <a href=\"https://twitter.com/raphink\">Raphaël Pinson</a> presented Camptocamp's <a href=\"https://devops-stack.io\">DevOps Stack</a> project,  a framework to deploy a standardized Kubernetes platform and its ecosystem, using a GitOps approach.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/slideshare?args=j0zNBoH48aFEO2\" style=\"border: 0; width: 100%;\"></iframe>\n<h1>Future Meetups</h1>\n<p>Don’t miss any more and join the <a href=\"https://www.meetup.com/Cloud-Native-Romandie\">Cloud Native Romandie meetup group</a>. This way, you can be part of a local community and stay up-to-date on different Cloud Native technologies.</p>\n<p>We look forward to meeting and exchanging with you during our next virtual meetup scheduled for Thursday, June 17th.</p>\n<p>Also, if you are keen to present a technology or you would like to see more of a technology, please let us know, we would be happy to support your interest. 
If it interests you, it also interests the community.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/march-cloud-native-romandie-meetup-o2f\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/open-source-standards-and-technical-debt-2g1/","relativePath":"posts/open-source-standards-and-technical-debt-2g1.md","relativeDir":"posts","base":"open-source-standards-and-technical-debt-2g1.md","name":"open-source-standards-and-technical-debt-2g1","frontmatter":{"title":"Open Source, Standards, and Technical Debt","stackbit_url_path":"posts/open-source-standards-and-technical-debt-2g1","date":"2021-02-03T11:10:41.220Z","excerpt":"As software needs evolve, technological evolution implies Technical Debt. Open Source can help mitigate Technical Debt by influencing on standards.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--OoTHD5Xo--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/a1ozenortfr4pe0d1vad.png","comments_count":3,"positive_reactions_count":4,"tags":["devops","agile","opensource","productivity"],"canonical_url":"https://www.camptocamp.com/en/news-events/open-source-standards-and-technical-debt","template":"post"},"html":"<p>Twenty years ago, Camptocamp was a pioneer company in Open Source adoption. Nowadays, <a href=\"https://ieeexplore.ieee.org/document/8880574\">Open Source has become mainstream</a> and the vast majority of the industry agrees on the many benefits of its practices. 
In fact, the Open Source model has become a <em>de facto</em> standard in some fields such as Web Frontend development.</p>\n<p>Many companies make an increasing use of Open Source software in their infrastructure and development stacks, and there are countless proven reasons for doing so, such as standard formats or <a href=\"https://www.forbes.com/sites/martenmickos/2018/09/26/why-openness-is-the-greatest-path-to-security/?sh=567640cf5f7f\">security by openness</a>, to name just a couple.</p>\n<p>In spite of these benefits, companies openly contributing —let alone Open Sourcing their own projects— are still somehow not very common, and most firms think of Open Source purely as a consumer’s benefit.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/dqdagbzcqm5kgkbcx4qz.png\" alt=\"Open Source Community | © Shutterstock\"></p>\n<h2>So why should you contribute to Open Source software?</h2>\n<p>For years, I used to think the best argument in favor of contributing to existing projects was maintenance and compatibility. If I fork a project and add functionality to it, there is a risk that my changes will become increasingly hard to maintain as time goes by. If the core developers of the program are aware of my changes and actively intend to go along with them, this risk will be greatly reduced.</p>\n<p>So contributing my changes ensures they will stay compatible with the base code as time goes by. There might even be improvements to my code if more people encounter a similar need in the future, and decide to build on top of my changes.</p>\n<p>Today, however, I believe the example I have just given is a specific case of a more general rule, which encompasses more pragmatic reasons to contribute code as Open Source. 
This more general context is linked to the concept of <a href=\"https://www.linuxfoundation.org/en/resources/publications/solving-technical-debt-with-open-source/\">Technical Debt</a>: the idea that technical decisions imply a hidden cost (a “debt”) that will have to be paid in the future in order to catch up with state-of-the-art technology.</p>\n<h2>So how do I minimize the debt?</h2>\n<p>Minimizing technical debt is a vast —and at times conflicting— subject. However, I think it is safe to assume that one way to reduce its risk is to stick to standards. The closer a project sticks to industry standards, the less likely it will have to be ported to another technological stack in the foreseeable future.</p>\n<h2>What if the standards don’t meet my needs?</h2>\n<p>When faced with a missing feature, most people’s reflex might be to start building a specific component to meet their use case. In the words of <a href=\"https://hiredthought.com/2018/09/01/intro-to-wardley-mapping/\">Strategy Theorist Simon Wardley</a>, they’ll be shifting this component to the Genesis stage, making it more unpredictable —or even erratic—, less standard, and thus more prone to building up technical debt in time.</p>\n<p>There is another way though. If my need is not met, and it is in fact a valid need (which is a very important question to ask in the first place), then other people might have this need in the future. When they do, someone, somewhere, will create a new standard for this need. When this new standard becomes enforced, then will my specific component’s debt become obvious.</p>\n<p>So what if, instead of building a specific component to make up for the lack of standard, I set the new standard myself? Open Source lets you do just that! It gives you the opportunity to be the first one providing an open implementation to a generic need, and the chance to make it into the new standard. 
If that new standard catches on, you have not only solved your problem, but you also haven’t accumulated technical debt. In fact, you’re ahead of the other users, because you set the new standard.</p>\n<h2>Wait, we’re no FAANG!</h2>\n<p>Obviously, the majority of organizations can't afford to have engineers focusing on IETF RFCs or moving ISO standards to fit their needs.</p>\n<p>However, a standard doesn't have to be that complicated. Let’s say I use this popular CLI tool, but I need to specify an option which doesn’t exist yet. I could hack something around the generation of its configuration file to produce the options I need. Or I could patch that tool and add a new flag for my needs, and contribute that change back to the project. Chances are, if I need this option, someone else does too.</p>\n<p>Now, every time someone has the need for that option, they’ll be using my new flag. I’ve contributed a new standard, and I haven’t incurred any technical debt on my side.</p>\n<p>It’s not the size of the steps that matters, it’s really the direction in which you take them.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/8z3stx204q4gut3w4coq.png\" alt=\"Start with Open Source | © Shutterstock\"></p>\n<h2>Where do I start?</h2>\n<p>Open Source is not just a philosophy. It encompasses licensing issues, technology standards, culture, and much more.</p>\n<p>At Camptocamp, we’ve been committed to the Open Source approach for years.</p>\n<p>This means we have a habit of solving problems in generic terms and building new standards.</p>\n<p>It also means we have contacts in many Open Source communities, which allow us to brainstorm ideas and quickly contribute to projects, ensuring a fast feedback loop on our work.</p>\n<p>When we implement Open Source software for our clients, we actively seek to limit technological debt. Because we believe in a world of standards, we don’t want our clients to feel entirely stuck with a technological stack in the future. 
Or even with us!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/open-source-standards-and-technical-debt-2g1\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/recognizing-faces-in-historical-photographs-3ikc/","relativePath":"posts/recognizing-faces-in-historical-photographs-3ikc.md","relativeDir":"posts","base":"recognizing-faces-in-historical-photographs-3ikc.md","name":"recognizing-faces-in-historical-photographs-3ikc","frontmatter":{"title":"Recognizing faces in historical photographs","stackbit_url_path":"posts/recognizing-faces-in-historical-photographs-3ikc","date":"2020-05-03T20:10:58.839Z","excerpt":"Using machine learning to identify people in historical photographs","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--xqA9hriG--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/zkzysi903rs0m6w8rvy1.png","comments_count":0,"positive_reactions_count":6,"tags":["machinelearning","facerecognition","showdev","opensource"],"canonical_url":"https://dev.to/raphink/recognizing-faces-in-historical-photographs-3ikc","template":"post"},"html":"<p>Genealogy has been one of my main personal activities for years. 
As part of my research, I've collected old pictures of family members that cousins I met were kind enough to send me (usually in scanned form, although at times I was actually given the custody of original photographs).</p>\n<h1>AI to Help with Identification</h1>\n<p>In the last few years, I've added these photographs to Google Photos to take advantage of the face recognition features. It's allowed me to quickly find pictures of people, and it has helped me to identify people in pictures with the help of AI.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/wtowmhpvyc51m79yef5j.png\" alt=\"Google Photos helps me keep track of known people in photographs\">\n<em>Google Photos helps me keep track of known people in photographs</em></p>\n<p>However useful this has been though, I've felt for some time that the scope was too narrow. I've found pictures of my great-grandfather and his associate in books and newspapers, and there's probably more I haven't seen yet.</p>\n<p>Furthermore, there's people in my collection that I haven't identified yet. Somewhere, somehow, I'm sure there's descendants of these people who have portraits of them, and would love to get more pictures of their ancestors. 
I have occasionally been able to identify them with notes in the back of pictures, but there's still a lot left to put a name on.</p>\n<h1>Thinking Broader</h1>\n<p>So I've been thinking… What if I could have a system similar to Google Photos face grouping and identification, but at a much more global scale?</p>\n<p>The time seems ripe for this:</p>\n<ul>\n<li>we have the algorithms to identify faces</li>\n<li>each new day brings hundreds of new historical pictures online —from family portraits to war pictures</li>\n<li>there's large genealogy databases that associate portraits with identity (<a href=\"http://ancestry.com/\">Ancestry</a>, <a href=\"https://geni.com\">Geni</a>, <a href=\"http://myheritage.com/\">MyHeritage</a>, <a href=\"https://geneanet.org/\">Geneanet</a>, etc.)</li>\n</ul>\n<h1>Starting Point</h1>\n<p>A few months ago, I've started playing with <a href=\"https://aws.amazon.com/rekognition/\">AWS Rekognition</a> to see what I could get out of my own personal collection. Encouraged by the results, I launched a little PoC project, which can be found at:</p>\n<p><a href=\"https://raphink.github.io/find-my-ancestor/\">https://raphink.github.io/find-my-ancestor/</a></p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/79uz5005ebjyex7knfwz.png\" alt=\"President Paul Kruger found by the AI among his family\">\n<em>President Paul Kruger found by the AI among his family</em></p>\n<p>In this project, I picked public Flickr collections featuring historical photographs from all over the world. I scanned a few million pictures and stored face metadata about them in AWS Rekognition. 
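For reference, the kind of query involved looks roughly like this in Python with boto3 (the actual PoC scripts are in Ruby; the collection name and helper functions below are illustrative, not the project's code):

```python
def best_matches(face_matches, threshold=90.0):
    """Keep only matches above a similarity threshold, best first.
    `face_matches` has the shape of Rekognition's FaceMatches list."""
    kept = [m for m in face_matches if m["Similarity"] >= threshold]
    return sorted(kept, key=lambda m: m["Similarity"], reverse=True)

def search_collection(collection_id, image_bytes, threshold=90.0):
    """Query an AWS Rekognition face collection with a photo.
    Requires AWS credentials; the collection must already be indexed."""
    import boto3  # imported lazily so the pure helper works offline
    client = boto3.client("rekognition")
    resp = client.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,
    )
    return best_matches(resp["FaceMatches"], threshold)

# Offline example, using the response shape Rekognition returns:
matches = [
    {"Similarity": 97.2, "Face": {"ExternalImageId": "kruger_1899.jpg"}},
    {"Similarity": 81.4, "Face": {"ExternalImageId": "unknown_042.jpg"}},
]
print(best_matches(matches))  # only the 97.2% match survives the 90% cut
```

Each photo's faces are first indexed into the collection; searching then returns candidate matches ranked by similarity, which is what the web UI surfaces.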
I then built a simple web UI to query this database from a given picture.</p>\n<p>The code (both Ruby scripts and web interface) can be found on GitHub:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=raphink%2Ffind-my-ancestor\" style=\"border: 0; width: 100%;\"></iframe>\n<p>I've communicated about this project on various Genealogy websites and groups. Unfortunately, the results were not so great. Apart from celebrities and royalty, it's hard to identify random people in a database of \"only\" a million faces, though I am quite sure the algorithm did identify my great-great-uncle in two pictures from the Boer War.</p>\n<h1>The Vision</h1>\n<p>MyHeritage recently worked with AI developer <a href=\"https://twitter.com/citnaj\">Jason Antic</a> to provide an amazing colorization algorithm.</p>\n<p>Most of these genealogical websites provide some kind of hinting system, which sends you regular notifications of:</p>\n<ul>\n<li>historical documents matching the names of people in your tree</li>\n<li>other trees with people matching yours</li>\n<li>DNA matches</li>\n</ul>\n<p>My goal would thus be to provide a new kind of hint, in the form of photographs matching known portraits of people in your tree.</p>\n<p>After giving it some thought, I'm afraid though that the database I have built in AWS Rekognition won't be of much help. It seems I should be using another kind of algorithm to group faces by known person in order to improve matching.</p>\n<p>I'd love to see this project get somewhere. 
There are so many people who could be identified… soldiers in WWI/WWII pictures, lost family members in concentration camps, and many other unsolved mysteries…</p>\n<p>Do you AI experts have any tips to help me continue this project?</p>\n<p><em><a href=\"https://dev.to/raphink/recognizing-faces-in-historical-photographs-3ikc\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/representing-technical-skills-on-a-timeline-1mk1/","relativePath":"posts/representing-technical-skills-on-a-timeline-1mk1.md","relativeDir":"posts","base":"representing-technical-skills-on-a-timeline-1mk1.md","name":"representing-technical-skills-on-a-timeline-1mk1","frontmatter":{"title":"Representing technical skills on a timeline","stackbit_url_path":"posts/representing-technical-skills-on-a-timeline-1mk1","date":"2020-05-11T15:19:14.715Z","excerpt":"Several ways to display technical skills on a timeline","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--5QxM9CRu--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/0kwh14busu2ckvvmofwp.png","comments_count":1,"positive_reactions_count":7,"tags":["jquery","latex","showdev","opensource"],"canonical_url":"https://dev.to/raphink/representing-technical-skills-on-a-timeline-1mk1","template":"post"},"html":"<p>CVs and other websites presenting technical skills often lack a time dimension that would show when and for how long a technology has been used.</p>\n<h1>Timeline on CV</h1>\n<p>About 8 years ago, I wanted to add a visual representation of my experience on my PDF 
CV.</p>\n<p>Since I already used LaTeX with the excellent <a href=\"https://ctan.org/pkg/moderncv\">moderncv class</a>, I wanted the solution to extend on that class. <a href=\"https://tex.stackexchange.com/questions/29725/putting-a-timeline-for-dates-in-moderncv\">TeX StackExchange did not disappoint</a> (they never do) and this gave birth to the <a href=\"https://ctan.org/pkg/moderntimeline\">\n<code>moderntimeline</code>\nLaTeX package</a> which I have been maintaining since.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/4ae3bwwiymnffrljjypc.png\" alt=\"Moderntimeline example\"></p>\n<p>To this day I still use this solution <a href=\"https://github.com/raphink/CV\">on my CV</a>.</p>\n<p>Since then, a template has even been added to <a href=\"https://www.overleaf.com/latex/examples/moderncv-with-modern-timeline/prmmmvtvfxsn\">Overleaf</a> to make it easier!</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/hvcmi91fhd8eaqex8u8c.png\" alt=\"Template on Overleaf\"></p>\n<h1>Technology timeline</h1>\n<p>The CV timeline is still not enough to present the data I wish to display, which is the temporal evolution of technical skills.</p>\n<h2>OpenHub</h2>\n<p>Among the many websites which analyze public code repositories to get metrics out of them, <a href=\"https://www.openhub.net/\">OpenHub</a> (previously Ohloh) is very interesting because it presents a timeline of languages used in projects.</p>\n<p>Here's an example with <a href=\"https://www.openhub.net/accounts/raphink\">my profile</a>, where you can identify clear periods: a lot of LaTeX (dark blue) in the first years (when I edited books), then Augeas (light grey), mostly Ruby (red) between 2012 and 2015, then mainly Go (purple).</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/0kwh14busu2ckvvmofwp.png\" alt=\"OpenHub Languages View\"></p>\n<h2>A broader approach</h2>\n<p>Not every tech skill can be measured with a number of code lines though.\nSo in 2013, I switched <a 
href=\"https://raphink.info/\">my main CV page</a> to a temporal skills view.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/8ld9549ukxidnb7i7jjv.png\" alt=\"Skills View\"></p>\n<p>This uses <a href=\"https://visjs.org/\">vis.js</a> to build a table of skills from <a href=\"https://github.com/raphink/CV/blob/gh-pages/items.json\">a JSON file</a>, e.g.:</p>\n<deckgo-highlight-code json   highlight-lines=\"\">\n          <code slot=\"code\">[\n  {&quot;id&quot;: &quot;Orange&quot;, &quot;content&quot;: &quot;&lt;img src=&#39;img/orange.png&#39; class=&#39;logo&#39; /&gt;&lt;b&gt;Orange Portails&lt;/b&gt;&lt;br /&gt;Systems Engineer&quot;, &quot;start&quot;: &quot;2006-06-01&quot;, &quot;end&quot;: &quot;2012-03-01&quot;, &quot;type&quot;: &quot;background&quot;, &quot;className&quot;: &quot;orange&quot;},\n  {&quot;id&quot;: &quot;Camptocamp&quot;, &quot;content&quot;: &quot;&lt;img src=&#39;img/camptocamp.png&#39; class=&#39;logo&#39; /&gt;&lt;b&gt;Camptocamp&lt;/b&gt;&lt;br /&gt;Infrastructure Developer&quot;, &quot;start&quot;: &quot;2012-03-01&quot;, &quot;type&quot;: &quot;background&quot;, &quot;className&quot;: &quot;camptocamp&quot;},\n\n  {&quot;group&quot;: &quot;provisioning&quot;, &quot;content&quot;: &quot;Debian FAI&quot;, &quot;start&quot;: &quot;2006-06-01&quot;, &quot;end&quot;: &quot;2012-03-01&quot;, &quot;className&quot;: &quot;contributed&quot;},\n  {&quot;group&quot;: &quot;provisioning&quot;, &quot;content&quot;: &quot;Kickstart&quot;, &quot;start&quot;: &quot;2006-06-01&quot;, &quot;className&quot;: &quot;implemented&quot;},\n  {&quot;group&quot;: &quot;provisioning&quot;, &quot;content&quot;: &quot;Terraform&quot;, &quot;name&quot;: &quot;terraform&quot;, &quot;start&quot;: &quot;2016-05-01&quot;, &quot;className&quot;: &quot;contributed&quot;}\n]</code>\n        </deckgo-highlight-code>\n<p>This JSON file is parsed and displayed on the page. Each skill can be assigned an icon as well as additional information. 
The skill bar can be clicked to display this information, taken from the\n<code>skills/</code>\ndirectory and <a href=\"https://github.com/raphink/CV/blob/gh-pages/skills/go/details.md\">documented in Markdown</a>.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/xe2j9wktc28leawnu9ee.png\" alt=\"Details View\"></p>\n<p>The code is open-source and can be forked on GitHub. Just check the\n<code>gh-pages</code>\nbranch:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=raphink%2FCV%20no-readme\" style=\"border: 0; width: 100%;\"></iframe>\n<p>As usual, pull requests are welcome if you find nice ways to improve this!</p>\n<p><em><a href=\"https://dev.to/raphink/representing-technical-skills-on-a-timeline-1mk1\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/taming-puppetserver-6-pt-ii-garbage-collection-2oh2/","relativePath":"posts/taming-puppetserver-6-pt-ii-garbage-collection-2oh2.md","relativeDir":"posts","base":"taming-puppetserver-6-pt-ii-garbage-collection-2oh2.md","name":"taming-puppetserver-6-pt-ii-garbage-collection-2oh2","frontmatter":{"title":"Taming Puppetserver 6 Pt II: Garbage Collection","stackbit_url_path":"posts/taming-puppetserver-6-pt-ii-garbage-collection-2oh2","date":"2020-05-15T10:49:37.667Z","excerpt":"PuppetServer can spend a lot of time doing garbage collection, which impacts its 
performance","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--0QPlyKrh--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/1wtluh7qosm01n29xmgq.png","comments_count":0,"positive_reactions_count":3,"tags":["puppet","observability","java","garbagecollection"],"canonical_url":"https://dev.to/camptocamp-ops/taming-puppetserver-6-pt-ii-garbage-collection-2oh2","template":"post"},"html":"<p>Now that our internal Puppet Infrastructure is <a href=\"https://dev.to/camptocamp-ops/taming-puppetserver-6-a-grafana-story-3c4f\">migrated to Puppet 6 and tuned</a>, it was time to switch a second infra to it.</p>\n<p>Yesterday, I migrated our second infrastructure, and started seeing more issues. The rules of thumb from the last post were useful, but I still needed to increase available memory to make up for a lack of computing power (probably attributable to the underlying IaaS throttling virtual CPUs).</p>\n<p>And then, a Puppetserver crashed with a\n<code>GC overhead limit exceeded</code>\nerror. 
This error happens when the JVM spends more than 98% of its time performing garbage collection.</p>\n<h1>Analyzing Garbage Collection Data</h1>\n<p>Looking at our Grafana dashboard, I realized we had no metrics about garbage collection, so I added a graph with two metrics:</p>\n<ul>\n<li>mean time per GC: the average time taken by each garbage collection request to complete, calculated as a rate over 1 minute (since\n<code>jvm_gc_collection_seconds_sum</code>\nis a cumulative counter)</li>\n<li>GC time: the percentage of time spent by the CPUs doing GC, over 1 minute (since\n<code>jvm_gc_collection_seconds_count</code>\nis also a cumulative counter)</li>\n</ul>\n<p>The formulas are as follows:</p>\n<ul>\n<li>\n<code>time per request {{gc}}</code>\n:\n<code>rate(jvm_gc_collection_seconds_sum{job=\"puppetserver\"}[1m])/rate(jvm_gc_collection_seconds_count{job=\"puppetserver\"}[1m])</code></li>\n<li>\n<code>rate {{gc}}</code>\n:\n<code>rate(jvm_gc_collection_seconds_sum{job=\"puppetserver\"}[1m])</code></li>\n</ul>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/euwyfd4972ggylucm4n3.png\" alt=\"Graph queries\"></p>\n<h1>GC time</h1>\n<p>I then looked at the graphs around the time when the\n<code>GC overhead limit exceeded</code>\nerror happened:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/w6k51xxcgoucexyieap3.png\" alt=\"GC overhead limit exceeded\"></p>\n<p>Yes, I had a problem indeed. I restarted the Puppetservers and this hasn't happened since. However, the rates for PS MarkSweep have remained pretty high. 
Here's the last 15 minutes as I'm writing:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/y93wzup9e7wglf4ectng.png\" alt=\"Standard activity for Puppet Infra 2\"></p>\n<p>In comparison, the infrastructure I upgraded last week is faring much better, with GC rates well under 10%:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/466p0q75k2irwheuze22.png\" alt=\"Standard activity for Puppet Infra 1\"></p>\n<h1>Mean time per GC</h1>\n<p>In addition to the high percentage of time spent performing garbage collection in the PS MarkSweep GC threads, I also noticed that the mean GC time for PS MarkSweep is pretty high, too, at around 2 to 3s. The values are slightly lower (a bit under 2s) on my first infrastructure.</p>\n<h1>Getting rid of PS MarkSweep</h1>\n<p>All in all, it seems PS MarkSweep garbage collection is to blame. It tends to take lots of CPU, for long periods of time.</p>\n<p>The good news is: <a href=\"https://stackoverflow.com/questions/39929758/ps-marksweep-is-which-garbage-collector/44923227#44923227\">PS MarkSweep is a legacy garbage collector</a>, and it's not too hard to get rid of it, since <a href=\"https://blog.idrsolutions.com/2019/05/java-8-vs-java-11-what-are-the-key-changes/\">OpenJDK 11 replaces it with the G1 garbage collector by default</a>.</p>\n<p>The <a href=\"https://hub.docker.com/r/puppet/puppetserver\">official puppetserver Docker image</a> installs the\n<code>puppetserver</code>\npackage, which pulls\n<code>openjdk-8-jre-headless</code>\nas a dependency. <a href=\"https://puppet.com/docs/puppetserver/latest/install_from_packages.html#java-support\">OpenJDK 11 is also officially supported</a> starting with Puppet 6.6, but the package doesn't allow installing it instead of OpenJDK 8. 
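</p>\n<p>One workaround is a derived image that adds OpenJDK 11 on top of the official one. A minimal sketch (the base image tag and the <code>USER</code> handling here are assumptions to verify against your setup):</p>\n<deckgo-highlight-code dockerfile   highlight-lines=\"\">\n          <code slot=\"code\">FROM puppet/puppetserver:6.9.2\nUSER root\n# Installing OpenJDK 11 next to 8 lets Ubuntu's alternatives\n# system switch the default java to the newer version\nRUN apt-get update &amp;&amp; \\\n    apt-get install -y openjdk-11-jre-headless &amp;&amp; \\\n    rm -rf /var/lib/apt/lists/*</code>\n        </deckgo-highlight-code>\n<p>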
So for now, I'll just derive an image and install\n<code>openjdk-11-jre-headless</code>\nin addition to OpenJDK 8 and let Ubuntu update the Java alternative automatically.</p>\n<p>The following graph shows the difference in GC time between PS MarkSweep and G1, following the upgrade to OpenJDK 11:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/1wtluh7qosm01n29xmgq.png\" alt=\"PS MarkSweep vs G1\"></p>\n<p>And here's what GC looks like after a good warm-up:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/hvr49m18eeidpxt31u7k.png\" alt=\"New GC graph\"></p>\n<p>From 2s to 80ms, that's a great improvement if you ask me!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/taming-puppetserver-6-pt-ii-garbage-collection-2oh2\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/tracing-x-to-my-4th-great-grandmother-2af9/","relativePath":"posts/tracing-x-to-my-4th-great-grandmother-2af9.md","relativeDir":"posts","base":"tracing-x-to-my-4th-great-grandmother-2af9.md","name":"tracing-x-to-my-4th-great-grandmother-2af9","frontmatter":{"title":"Tracing X to my 4th great-grandmother","stackbit_url_path":"posts/tracing-x-to-my-4th-great-grandmother-2af9","date":"2020-06-03T22:23:33.398Z","excerpt":"X chromosomes have a specific inheritance pattern which often allows narrowing down family branches when testing relationship 
hypotheses","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--n-mB5MIa--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://upload.wikimedia.org/wikipedia/commons/thumb/5/59/Ideogram_house_mouse_chromosome_X.svg/1280px-Ideogram_house_mouse_chromosome_X.svg.png","comments_count":0,"positive_reactions_count":2,"tags":["dna","genealogy","dnapainting"],"canonical_url":"https://dev.to/raphink/tracing-x-to-my-4th-great-grandmother-2af9","template":"post"},"html":"<p>DNA tests are fun. They can give you a hint on your origins (though the results depend a lot on the data sets from the company providing them), get you in touch with cousins (or even closer relatives) you didn't know about, confirm genealogical hypotheses, and much more...</p>\n<p>One thing that is interesting to do with DNA results is to trace known DNA segments to a known ancestor. This is not always possible, and usually requires several triangulated matches on that segment to establish where it comes from.</p>\n<h1>X chromosome inheritance</h1>\n<p>In my DNA results, I have a rather large match on my X chromosome. This match is about 40cM long (for a total length of about <a href=\"https://isogg.org/wiki/CentiMorgan#cm_values_per_chromosome\">196cM at FamilyTreeDNA</a>).</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/690rgl67j0nnunn5shm0.png\" alt=\"A 40cM match on X\"></p>\n<p>Where could this segment have come from? Is there a way to tell?</p>\n<p>In autosomal tests, the X chromosome is often included, but it plays by\nits own rules. 
This is because X is transmitted in a funny way, due to\nthe fact that males only have one X chromosome (and one Y chromosome),\nwhile females have two (and no Y chromosome).</p>\n<p>Here's an illustration <a href=\"https://en.wikipedia.org/wiki/X_chromosome#Inheritance_pattern\">from Wikipedia</a> explaining how this affects the inheritance rules of that special chromosome:</p>\n<p><img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/e/ed/X_chromosome_ancestral_line_Fibonacci_sequence.svg/1920px-X_chromosome_ancestral_line_Fibonacci_sequence.svg.png\" alt=\"X chromosome inheritance\"></p>\n<p>As you can see, the consequence is pretty simple:</p>\n<ul>\n<li>men get their only X chromosome from their mother (I'm leaving aside the very rare XXY cases)</li>\n<li>women get one X chromosome from each of their parents.</li>\n</ul>\n<p>As a result, a segment of an X chromosome can travel through both men and women (unlike segments on the Y chromosome, which travel only through men), but can never be carried by two men in a row.</p>\n<p>In other words, since I am a man, I cannot have gotten this segment from:</p>\n<ul>\n<li>my father (or ancestors on his side, obviously)</li>\n<li>my grand-father's father</li>\n<li>my grand-mother's father's father</li>\n<li>or any other relationship involving a father and his son</li>\n</ul>\n<p>Now I'm quite lucky, as I happen to know the person I share this 40cM X match with, even though the relationship is quite distant:</p>\n<ul>\n<li>First off, that person is Jewish, an ethnic group that matches my mother's side exclusively. Check ✅</li>\n<li>Then, this person comes from a very special family branch from Egypt, which happens to be on my grand-father's mother's side. 
Check again ✅</li>\n<li>I don't know any other possible (close enough) relation with that person that could explain this X match, and we both have quite developed family trees ✅</li>\n</ul>\n<p>So, unless a closer path can be found in the future (which is unlikely), the inheritance hypothesis checks out on both sides of the tree, confirming the chain of ancestors:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/yd2eyu0ihscnz7emcd51.png\" alt=\"Tracing the X segment\"></p>\n<p>You can see how that X segment was transmitted from my 4th great-grandmother Luna Mondolfo (born around 1790) all the way to myself (on the left) and my DNA match (on the right).</p>\n<p>Once this is confirmed, it also allows considering that any other match triangulating on this X segment is more likely linked to this side of the family, known and verified back to the 18th century.</p>\n<h1>Keeping track of segments</h1>\n<p>A great way of tracing known DNA segments is to use <a href=\"https://dnapainter.com/\">DNA Painter</a>, a free website where you can gather segment information from various companies (FTDNA, Ancestry, GedMatch, MyHeritage, etc.) 
to \"paint\" your chromosomes.</p>\n<p>Here's an example showing my maternal labels, with chromosome X at the bottom:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/vejbspsiy6tmsg8wcpu1.png\" alt=\"DNA Painter\"></p>\n<p>Have you made interesting finds with DNA tests as well?</p>\n<p><em><a href=\"https://dev.to/raphink/tracing-x-to-my-4th-great-grandmother-2af9\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/unshallowing-a-git-repository-24nd/","relativePath":"posts/unshallowing-a-git-repository-24nd.md","relativeDir":"posts","base":"unshallowing-a-git-repository-24nd.md","name":"unshallowing-a-git-repository-24nd","frontmatter":{"title":"Unshallowing a Git repository","stackbit_url_path":"posts/unshallowing-a-git-repository-24nd","date":"2020-05-08T07:36:38.220Z","excerpt":"GitLab allows performing shallow repository clones (and it seems to be the default in recent versions...","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--nIHJww2i--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/nsbbm80zgqqypxyqtx1d.png","comments_count":0,"positive_reactions_count":1,"tags":["devops","git","cicd","todayilearned"],"canonical_url":"https://dev.to/camptocamp-ops/unshallowing-a-git-repository-24nd","template":"post"},"html":"<p>GitLab allows <a href=\"https://docs.gitlab.com/ee/ci/yaml/#shallow-cloning\">performing shallow repository clones</a> (and it seems to be the default in recent versions from what I can tell).</p>\n<p>In order to run r10k, I need a full repository though, 
because r10k will copy it to cache and use this copy as a reference. This is what happens when you use a shallow repository:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\"> [2020-05-08 06:53:15 - DEBUG] Replacing /etc/puppetlabs/code/environments/modulesync_update and checking out modulesync_update\n [2020-05-08 06:53:56 - ERROR] Command exited with non-zero exit code:\n Command: git clone /builds/camptocamp/is/puppet/puppetmaster-c2c /etc/puppetlabs/code/environments/modulesync_update --reference /etc/puppetlabs/code/cache/-builds-camptocamp-is-puppet-puppetmaster-c2c\n Stderr:\n Cloning into &#39;/etc/puppetlabs/code/environments/modulesync_update&#39;...\n fatal: reference repository &#39;/etc/puppetlabs/code/cache/-builds-camptocamp-is-puppet-puppetmaster-c2c&#39; is shallow\n Exit code: 128\n [2020-05-08 06:53:56 - DEBUG] Purging unmanaged environments for deployment...</code>\n        </deckgo-highlight-code>\n<p>Git provides a\n<code>fetch --unshallow</code>\ncommand which solves the problem, so we just need to run\n<code>git fetch --unshallow</code>\nin the repository before running r10k.</p>\n<p>However, some of our (older) GitLab installs don't make shallow clones. 
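</p>\n<p>You can tell the two cases apart with <code>git rev-parse --is-shallow-repository</code>, available since Git 2.15. A small self-contained demo, using throwaway paths:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\"># Build a throwaway origin repository, then make a shallow clone of it\ntmp=$(mktemp -d)\ngit init -q &quot;$tmp/origin&quot;\ngit -C &quot;$tmp/origin&quot; -c user.email=ci@example.com -c user.name=ci \\\n  commit -q --allow-empty -m init\ngit clone -q --depth 1 &quot;file://$tmp/origin&quot; &quot;$tmp/clone&quot;\n# Prints &quot;true&quot; in the shallow clone, &quot;false&quot; in the full origin\ngit -C &quot;$tmp/clone&quot; rev-parse --is-shallow-repository\ngit -C &quot;$tmp/origin&quot; rev-parse --is-shallow-repository</code>\n        </deckgo-highlight-code>\n<p>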
They make full clones with a single detached branch, so we need\n<code>fetch --all</code>\ninstead.</p>\n<p>To make it work in all configurations, I ended up running:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">git fetch --unshallow || git fetch --all</code>\n        </deckgo-highlight-code>\n<p>And then I run r10k on the repository.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/unshallowing-a-git-repository-24nd\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/automated-puppet-impact-analysis-1c1/","relativePath":"posts/automated-puppet-impact-analysis-1c1.md","relativeDir":"posts","base":"automated-puppet-impact-analysis-1c1.md","name":"automated-puppet-impact-analysis-1c1","frontmatter":{"title":"Automated Puppet Impact Analysis","stackbit_url_path":"posts/automated-puppet-impact-analysis-1c1","date":"2020-05-07T20:49:52.016Z","excerpt":"Using GitLab Pipelines and Catalog Diff to preview changes between two branches in a merge request","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":4,"positive_reactions_count":9,"tags":["puppet","devops","codequality","showdev"],"canonical_url":"https://dev.to/camptocamp-ops/automated-puppet-impact-analysis-1c1","template":"post"},"html":"<p>In <a href=\"https://dev.to/camptocamp-ops/diffing-puppet-environments-1fno\">last week's 
post</a>, I presented how to set up <a href=\"https://github.com/camptocamp/puppet-catalog-diff\">Puppet Catalog Diff</a> to diff between two Puppet environments.</p>\n<p>Wouldn't it be great if this tool could be used to perform automatic impact analysis before merging a Git branch (aka Merge Request or Pull Request)? Well, it can.</p>\n<h1>The Setup</h1>\n<p>Our current setup is based on <a href=\"https://www.openshift.com/\">RedHat OpenShift</a> and <a href=\"https://gitlab.com/\">GitLab</a>.\nThis is however easily portable to other installation choices.</p>\n<h2>Puppet Infrastructure</h2>\n<p>The Puppet infrastructure is currently running in OpenShift, using our series of <a href=\"https://github.com/camptocamp/charts\">Puppet Helm Charts</a> for Puppetserver, PuppetDB, Puppetboard and Puppet Catalog Diff Viewer.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/dn3718sndgu4zyrtgklv.png\" alt=\"Puppet-related Pods\"></p>\n<p>We are in the process of migrating from Puppet 5 to Puppet 6, so we currently have two Puppetserver charts deployed, one for each version. The\n<code>puppetserver</code>\nservice points to two Puppet 5 pods, while the\n<code>puppetserver6</code>\nservice points to two Puppet 6 pods.</p>\n<p>We have passthrough OpenShift routes sitting in front of the services to expose them to the rest of the infra (on port 443 instead of 8140).</p>\n<h2>Lint and Deployment</h2>\n<p>Puppet code deployment is done using a GitLab Runner chart whose deployment mounts the Puppetcode volume (PVC from the Puppetserver deployment). 
We then run r10k in a GitLab pipeline every time a branch is pushed.</p>\n<p>We also lint the code before deploying it, using the <a href=\"https://github.com/declarativesystems/onceover-codequality\">Onceover Code Quality plugin</a>.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/t6o39hw6zpxdq27uhhmc.png\" alt=\"Deployment pipeline\"></p>\n<p>Here's what it looks like in\n<code>.gitlab-ci.yml</code>\n:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\nstages:\n  - lint\n  - deploy\n\n.create_r10k_yaml: &amp;create_r10k_yaml |\n  cat &lt;&lt; EOF &gt; /tmp/r10k.yaml\n  ---\n  :cachedir: /etc/puppetlabs/code/cache\n\n  :sources:\n    :main:\n      remote: $CI_PROJECT_DIR\n      basedir: /etc/puppetlabs/code/environments\n  EOF\n\nlinting-puppet-hiera:\n  image: camptocamp/onceover-codequality:latest\n  stage: lint\n  script:\n    - &#39;onceover run codequality  --no_docs&#39;\n  tags:\n    - puppetmaster\n  rules:\n    # Skip linting if the commit message contains &quot;[skip lint]&quot;\n    - if: &#39;$CI_COMMIT_MESSAGE !~ /\\[skip lint\\]/&#39;\n\nr10k-deploy:\n  image: puppet/r10k:3.1.0\n  stage: deploy\n  tags:\n    # Select GitLab runner from the Puppet OpenShift env (which mounts Puppetcode)\n    - puppetmaster\n  before_script:\n    - while [ -f /etc/puppetlabs/code/r10k.lock ]; do echo -n &quot;Waiting for lock from &quot;; cat /etc/puppetlabs/code/r10k.lock || echo; sleep 2; done\n    - hostname -f &gt; /etc/puppetlabs/code/r10k.lock\n  script:\n    - umask 0002\n    # Git https secrets are mounted in the GitLab runner\n    - ln -s /secrets/.netrc ~/\n    - *create_r10k_yaml\n    - git fetch --unshallow\n    - &#39;git branch -r | grep -v &quot;\\-&gt;&quot; | while read remote; do git branch --track &quot;${remote#origin/}&quot; &quot;$remote&quot;; done&#39;\n    - r10k deploy --color -c /tmp/r10k.yaml environment ${CI_COMMIT_REF_NAME} -p --verbose=debug\n    - puppet generate types --environment 
${CI_COMMIT_REF_NAME}\n  after_script:\n    - rm -f /etc/puppetlabs/code/r10k.lock</code>\n        </deckgo-highlight-code>\n<h2>Catalog Diff</h2>\n<p>When a Merge Request is open, we want to analyse the impact it will have before we can merge it. This is where Catalog Diff plays a big role.</p>\n<p>Even if you don't have a huge Puppet infrastructure, Catalog Diff is quite heavy to launch, as it will request lots of catalogs in a small amount of time.</p>\n<p>The new\n<code>--old_catalog_from_puppetdb</code>\noption introduced in version 1.7.0 reduces the load by half by getting the \"from\" catalogs from PuppetDB, but it's still kind of a large batch of requests to the Puppet servers.</p>\n<p>For this reason, we run Catalog Diff only on demand, as a manual task. Lint and Deploy are run a second time, to make them mandatory passing steps before a merge can be validated.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/xlzb38uvg70hndubvrze.png\" alt=\"MR Pipeline\"></p>\n<p>Here's the setup:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">.create_puppetdb_conf: &amp;create_puppetdb_conf |\n  cat &lt;&lt; EOF &gt; /etc/puppetlabs/puppet/puppetdb.conf\n  [main]\n  server_urls = https://puppetdb:8081\n  EOF\n\n.create_csr_attributes_yaml: &amp;create_csr_attributes_yaml |\n  cat &lt;&lt; EOF &gt; /etc/puppetlabs/puppet/csr_attributes.yaml\n  ---\n  custom_attributes:\n    # Our autosign script uses hashed secrets based on a psk,\n    # the certname and the environment coded in the certificate\n    1.2.840.113549.1.9.7: &#39;$(echo -n &quot;$psk/$(puppet config print certname)/production&quot; | openssl dgst -binary -sha256 | openssl base64)&#39;\n  extension_requests:\n    # We use the pp_authorization=catalog extension to set up auth.conf for v4/catalog\n    1.3.6.1.4.1.34380.1.3.1: &#39;catalog&#39;\n    1.3.6.1.4.1.34380.1.1.12: &#39;production&#39;\n  EOF\n\n.cleanup_cert: &amp;cleanup_cert |\n  curl -s -X DELETE \\\n  -H 
&quot;Accept:application/json&quot; -H &quot;Content-Type: text/pson&quot; \\\n  --cacert &quot;/etc/puppetlabs/puppet/ssl/certs/ca.pem&quot; \\\n  --cert &quot;/etc/puppetlabs/puppet/ssl/certs/$(puppet config print certname).pem&quot; \\\n  --key &quot;/etc/puppetlabs/puppet/ssl/private_keys/$(puppet config print certname).pem&quot; \\\n  &quot;https://puppetserver:8140/puppet-ca/v1/certificate_status/$(puppet config print certname)?environment=production&quot;\n\n\ncatalog-diff:\n  image: puppet/puppet-agent:6.15.0\n  stage: diff\n  tags:\n    # Select GitLab runner in Puppet OpenShift env to get direct access to services\n    - puppetmaster\n  script:\n    - apt update\n    - apt install -y locales puppetdb-termini\n    - locale-gen en_US.UTF-8\n    - *create_puppetdb_conf\n    - *create_csr_attributes_yaml\n    # Generate a certificate and get it signed\n    - puppet ssl submit_request --ca_server puppetserver --certificate_revocation=false\n    # We currently diff with puppetserver6 for the migration\n    - puppet catalog --environment ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME} --certificate_revocation=false diff puppetserver:8140/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME} puppetserver6:8140/${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME} --show_resource_diff --changed_depth 1000 --content_diff --old_catalog_from_puppetdb --certless --threads 4 --output_report /catalog-diff/mr_${CI_MERGE_REQUEST_IID}_${CI_JOB_ID}.json\n  after_script:\n    # We have configured our auth.conf to allow nodes to clean their own cert, see https://dev.to/camptocamp-ops/automatic-renewal-of-puppet-certificates-28pm\n    - *cleanup_cert\n    - echo &quot;You can view the report details at https://puppetdiff.example.com/?report=mr_${CI_MERGE_REQUEST_IID}_${CI_JOB_ID}&quot;\n    # Post a comment on the Merge Request\n    - &#39;curl -k -X POST -H &quot;Private-Token: $CI_BOT_TOKEN&quot; -d &quot;body=You can view the Catalog Diff report details at 
https://puppetdiff.example.com/?report=mr_${CI_MERGE_REQUEST_IID}_${CI_JOB_ID}&quot; $CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes&#39;\n  # Allow failure so the Merge Request can be validated even without catalog diff\n  allow_failure: true\n  rules:\n    - if: &#39;$CI_MERGE_REQUEST_ID&#39;\n      when: manual\n  variables:\n    LANG: en_US.UTF-8\n    LC_ALL: en_US.UTF-8</code>\n        </deckgo-highlight-code>\n<p>A few notes on that setup:</p>\n<ol>\n<li>\n<p>PuppetDB is accessed via SSL. Since we have valid certificates to access the Puppet server, we might as well use them, but plain port 8080 works too if you have that possibility.</p>\n</li>\n<li>\n<p>We use an <a href=\"https://puppet.com/docs/puppet/latest/ssl_autosign.html#enabling-policy-based-autosigning\">autosign script</a> to sign certificates using a PSK (which we hash). If it's easier for you, you could also inject a valid key and certificate into the build instead of a PSK.</p>\n</li>\n<li>\n<p>If you don't generate a certificate, you don't need the cleanup step either.</p>\n</li>\n<li>\n<p>The reports are saved to the\n<code>/catalog-diff</code>\ndirectory, which is mounted in the runner from the Puppet Catalog Diff Viewer PVC. This way, reports are accessible directly in the viewer by passing their name in the query string.</p>\n</li>\n<li>\n<p>The Merge Request curl request requires passing a\n<code>CI_BOT_TOKEN</code>\nvariable to the build. We currently set one in the build variables, using a robot GitLab account. 
If you have a GitLab Silver or greater plan, you can use the\n<code>CI_JOB_TOKEN</code>\nvariable instead.</p>\n</li>\n</ol>\n<h2>What does it look like?</h2>\n<p>Here are some screenshots of a typical workflow.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/b7djk6oti87cf1iatsap.png\" alt=\"Validated Merge Request with comment\"></p>\n<p><em>The Merge Request validated, with the comment left by the bot after the Catalog Diff build was run (see the 3 steps on line 3)</em></p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/a87ep2w72mdbvkrdfsv8.png\" alt=\"Puppet Catalog Diff Viewer\"></p>\n<p><em>Viewing the report generated by the Puppet Catalog Diff run</em></p>\n<h2>Demo</h2>\n<p>Here's a video demo of the setup described above:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/youtube?args=6LOaHsQDsiI\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>In summary</h2>\n<p>This set up allows us to:</p>\n<ul>\n<li>Validate code quality (lint) before deploying environments</li>\n<li>Check which changes will be brought to Puppet catalogs before accepting a Merge Request</li>\n</ul>\n<p>As stated in the previous blog post, this doesn't account for every change, since changes in plugins (facts, types &#x26; providers, Augeas lenses, etc.) 
can also impact servers but won't be seen in catalog diffs.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/automated-puppet-impact-analysis-1c1\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/cleaning-up-puppet-code-4da2/","relativePath":"posts/cleaning-up-puppet-code-4da2.md","relativeDir":"posts","base":"cleaning-up-puppet-code-4da2.md","name":"cleaning-up-puppet-code-4da2","frontmatter":{"title":"Cleaning up Puppet Code","stackbit_url_path":"posts/cleaning-up-puppet-code-4da2","date":"2020-04-29T10:54:01.869Z","excerpt":"Code quality is important to ensure style consistency and easy maintenance. Puppet-lint, Onceover and puppet-ghostbuster help ensure Puppet code quality.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":1,"positive_reactions_count":7,"tags":["puppet","devops","codequality","opensource"],"canonical_url":"https://www.camptocamp.com/actualite/cleaning-up-puppet-code/","template":"post"},"html":"<p>After months and years of using <a href=\"https://puppet.com/\">Puppet</a>, the code base becomes increasingly complex and cluttered. How can you ensure its quality, as well as clean up unused code?</p>\n<h1>puppet-lint</h1>\n<p>In the Puppet world, <a href=\"http://puppet-lint.com/\">puppet-lint</a> is the reference for code quality. 
It is used as a standard to check that modules follow the <a href=\"https://puppet.com/docs/puppet/5.5/style_guide.html\">style guide</a>, ensuring consistency in coding style and practices. puppet-lint can also be used in your control repository to check your private modules (such as <a href=\"https://puppet.com/docs/pe/latest/the_roles_and_profiles_method.html\">roles &#x26; profiles</a>).</p>\n<p>There are at least three ways of achieving this: using <a href=\"https://puppet.com/docs/pdk/1.x/pdk.html\">PDK</a>, a\n<code>Rakefile</code>\n, or <a href=\"https://github.com/dylanratcliffe/onceover\">onceover</a> along with its <a href=\"https://github.com/declarativesystems/onceover-codequality\">code quality plugin</a>.</p>\n<h2>PDK</h2>\n<p><a href=\"https://puppet.com/docs/pdk/1.x/pdk.html\">PDK</a>, the standard tool for managing Puppet modules, also works with control repositories. Once <a href=\"https://puppet.com/docs/pdk/1.x/pdk_install.html\">installed</a>, you can convert your control repository and run validation tests with:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ pdk convert\n$ pdk validate</code>\n        </deckgo-highlight-code>\n<h2>Rakefile method</h2>\n<p>The\n<code>Rakefile</code>\nmethod is <a href=\"https://github.com/rodjek/puppet-lint#%20testing-with-puppet-lint-as-a-rake-task\">an easy way</a> to automate\n<code>puppet-lint</code>\n:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">Rake::Task[:lint].clear\nPuppetLint::RakeTask.new :lint do |config|\n  config.ignore_paths = [\n    &#39;modules/**/*.pp&#39;,\n    &#39;vendor/**/*&#39;,\n  ]\n  config.disable_checks = [\n    &#39;80chars&#39;,\n    &#39;documentation&#39;,\n  ]\n  config.fail_on_warnings = true\n  config.fix = true if ENV[&#39;PUPPETLINT_FIX&#39;] == &#39;yes&#39;\nend</code>\n        </deckgo-highlight-code>\n<p>Add a\n<code>Gemfile</code>\nin your repository to 
install\n<code>puppet-lint</code>\n:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">source ENV[&#39;GEM_SOURCE&#39;] || &quot;https://rubygems.org&quot;\n\ngroup :development, :test do\n  gem &#39;rake&#39;,                                             :require =&gt; false\n  gem &#39;puppet-lint&#39;,                                      :require =&gt; false\n  \n  # Other lint plugins (optional)\n  gem &#39;puppet-lint-spaceship_operator_without_tag-check&#39;, :require =&gt; false\n  gem &#39;puppet-lint-unquoted_string-check&#39;,                :require =&gt; false\n  gem &#39;puppet-lint-undef_in_function-check&#39;,              :require =&gt; false\n  gem &#39;puppet-lint-leading_zero-check&#39;,                   :require =&gt; false\n  gem &#39;puppet-lint-trailing_comma-check&#39;,                 :require =&gt; false\n  gem &#39;puppet-lint-file_ensure-check&#39;,                    :require =&gt; false\n  gem &#39;puppet-lint-version_comparison-check&#39;,             :require =&gt; false\n  \n  # You can also use the voxpupuli-test gem,\n  # which pulls rake, puppet-lint &amp; plugins as dependencies\n  gem &#39;voxpupuli-test&#39;,                                   :require =&gt; false\nend</code>\n        </deckgo-highlight-code>\n<p>You can then run the lint with:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ bundle install --path vendor/bundle\n$ bundle exec rake lint\n# And if you want to autofix the detected mistakes\n$ PUPPETLINT_FIX=yes bundle exec rake lint</code>\n        </deckgo-highlight-code>\n<h2>Onceover Code Quality</h2>\n<p><a href=\"https://github.com/dylanratcliffe/onceover\">Onceover</a> is a toolbox to automate tasks for Puppet control repositories. 
Among other things, its code quality plugin lets you run syntax checks and invoke\n<code>puppet-lint</code>\n.</p>\n<p>In order to use it, update your\n<code>Gemfile</code>\n, e.g.:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">source ENV[&#39;GEM_SOURCE&#39;] || &quot;https://rubygems.org&quot;\n\ngroup :development, :test do\n  gem &#39;voxpupuli-test&#39;,                                   :require =&gt; false\n  \n  gem &#39;onceover&#39;,                                         :require =&gt; false\n  gem &#39;onceover-codequality&#39;,                             :require =&gt; false\nend</code>\n        </deckgo-highlight-code>\n<p>Refresh your bundle and run onceover:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ bundle update\n$ bundle exec onceover run codequality --no-docs</code>\n        </deckgo-highlight-code>\n<p>Ideally, run this on every commit in a Continuous Integration/Continuous Deployment setup. At Camptocamp, we use a <a href=\"https://docs.gitlab.com/ee/ci/\">GitLab CI</a> pipeline to check our control repo using Onceover before deploying it with <a href=\"https://github.com/puppetlabs/r10k\">r10k</a> (also run from a GitLab CI runner).</p>\n<p><img src=\"https://www.camptocamp.com/wp-content/uploads/puppetmaster_pipeline.png\" alt=\"PuppetMaster Pipeline in GitLab CI\"></p>\n<h1>Getting rid of dead code</h1>\n<p>You've checked the quality of your existing code. Good! But what if you're actually maintaining and cleaning code that you don't use anymore? This would be quite the waste of time... At Camptocamp, we've built on\n<code>puppet-lint</code>\nto provide a system to detect unused code and help us clean it up. </p>\n<p>This is what the <a href=\"https://github.com/camptocamp/puppet-ghostbuster\">puppet-ghostbuster</a> project is for. 
Under the hood,\n<code>puppet-ghostbuster</code>\nis a collection of\n<code>puppet-lint</code>\nplugins, distributed in a single\n<code>puppet-ghostbuster</code>\ngem.</p>\n<p>These plugins analyze your Puppet code and then connect to your PuppetDB to check if that code is actually used for any known node. It can also check Hiera data for unused keys. Just as previously, you can set it up as a Rake task, but our current setup requires a <a href=\"https://github.com/rodjek/puppet-lint/pull/919\">patch</a> to\n<code>puppet-lint</code>\nin order to whitelist the\n<code>puppet-lint</code>\nchecks activated (the current release of\n<code>puppet-lint</code>\nonly supports blacklisting checks in Rake tasks).</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">source ENV[&#39;GEM_SOURCE&#39;] || &quot;https://rubygems.org&quot;\n\ngroup :development, :test do\n  gem &#39;rake&#39;,                                             :require =&gt; false\n  gem &#39;puppet-lint&#39;,                                      :require =&gt; false,\n    :git =&gt; &#39;https://github.com/raphink/puppet-lint&#39;,\n    :ref =&gt; &#39;2cac4fb&#39;   # Includes patch for whitelisting checks\n  gem &#39;puppet-ghostbuster&#39;,                               :require =&gt; false\nend</code>\n        </deckgo-highlight-code>\n<p>You can then set up a Rake task such as this one:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">PuppetLint::RakeTask.new :ghostbuster do |config|\n  config.pattern = [&#39;./site/**/*&#39;]\n  config.only_checks = [\n    &#39;ghostbuster_classes&#39;,\n    &#39;ghostbuster_defines&#39;,\n    &#39;ghostbuster_facts&#39;,\n    &#39;ghostbuster_files&#39;,\n    &#39;ghostbuster_functions&#39;,\n    &#39;ghostbuster_hiera_files&#39;,\n    &#39;ghostbuster_templates&#39;,\n    &#39;ghostbuster_types&#39;,\n  ]\n  config.fail_on_warnings = true\nend</code>\n        
</deckgo-highlight-code>\n<p><code>puppet-ghostbuster</code>\nrequires information to connect to your PuppetDB, so you need to provide the following environment variables:</p>\n<ul>\n<li><code>PUPPETDB_URL</code>: URL of your PuppetDB</li>\n<li><code>PUPPETDB_CERT_FILE</code>: path to the certificate to use to connect to PuppetDB</li>\n<li><code>PUPPETDB_KEY_FILE</code>: path to the key to use to connect to PuppetDB</li>\n<li><code>PUPPETDB_CACERT_FILE</code>: path to the Puppet CA certificate</li>\n<li><code>HIERA_YAML_PATH</code>: path to the\n<code>hiera.yaml</code>\nto use</li>\n</ul>\n<p>If you don't want to provide certificates and keys, you can connect to the PuppetDB through the unencrypted port 8080, for example by forwarding it through SSH. At Camptocamp, we're automating this setup by using <a href=\"https://github.com/cyberark/summon\">summon</a> as a wrapper to launch the command.</p>\n<p>We store the certificates and keys to connect to PuppetDB in <a href=\"https://github.com/gopasspw/gopass\">gopass</a>, then provide a\n<code>secrets.yml</code>\nfile like so:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">PUPPETDB_URL: https://puppetdb.example.com:8081\nPUPPETDB_CERT_FILE: !var:file path/to/secret:cert\nPUPPETDB_KEY_FILE: !var:file path/to/secret:key\nPUPPETDB_CACERT_FILE: !var:file path/to/secret:cacert\nHIERA_YAML_PATH: ./hiera.yaml</code>\n        </deckgo-highlight-code>\n<p>This allows us to run:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ summon bundle exec rake ghostbuster</code>\n        </deckgo-highlight-code>\n<p>This returns a list of classes, defines, files, templates, etc. that are unused in our code. We can then check these results and clean up our code! </p>\n<p>Do you have ideas to contribute to\n<code>puppet-ghostbuster</code>\n? 
<a href=\"https://github.com/camptocamp/puppet-ghostbuster\">Pull requests are welcome</a>! You can also <a href=\"https://www.camptocamp.com/contact/\">contact us</a> for quotes on Puppet consulting or <a href=\"https://www.camptocamp.com/formations/\">training</a>!</p>\n<p><em>This post was originally published on <a href=\"https://www.camptocamp.com/actualite/cleaning-up-puppet-code/\">https://www.camptocamp.com/actualite/cleaning-up-puppet-code/</a></em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/cleaning-up-puppet-code-4da2\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/configuration-surgery-with-go-structure-tags-12a4/","relativePath":"posts/configuration-surgery-with-go-structure-tags-12a4.md","relativeDir":"posts","base":"configuration-surgery-with-go-structure-tags-12a4.md","name":"configuration-surgery-with-go-structure-tags-12a4","frontmatter":{"title":"Configuration surgery with Go structure tags","stackbit_url_path":"posts/configuration-surgery-with-go-structure-tags-12a4","date":"2020-06-10T20:55:35.728Z","excerpt":"Narcissus is a reflection library letting you edit configuration files in Go","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--vEyenOH7--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/p9blragy3jsbexp72cs0.jpg","comments_count":0,"positive_reactions_count":5,"tags":["go","opensource","augeas","showdev"],"canonical_url":"https://dev.to/raphink/configuration-surgery-with-go-structure-tags-12a4","template":"post"},"html":"<p>From Docker to 
Kubernetes, from Consul to Terraform, Go has been used increasingly in system tools in recent years.</p>\n<p>Since most of these tools manage systems running on Unix, one of their core tasks is to deal with files, and <a href=\"https://dev.to/camptocamp-ops/how-to-manage-files-with-puppet-55e4\">configuration files in particular</a>.</p>\n<h1>Augeas: the configuration management scalpel</h1>\n<p><a href=\"https://augeas.net/\">Augeas</a> is a C library to modify configuration files. It lets you parse files in many different syntaxes (over 300 supported by default), modify the configuration using a tree accessed with an XPath-like language, and write the configuration back.</p>\n<p>It tries hard to modify only what you mean to, keeping all details (spaces, indentations, new lines, comments) unchanged.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/xij7ocuklz6agg2rm95h.png\" alt=\"Augeas\"></p>\n<p>Because of its history, Augeas is mainly known in the Puppet world. However, there are also plugins for <a href=\"https://github.com/paluh/ansible-augeas\">Ansible</a>, <a href=\"https://github.com/nhuff/chef-augeas\">Chef</a>, <a href=\"https://docs.saltstack.com/en/latest/ref/states/all/salt.states.augeas.html\">SaltStack</a>, <a href=\"https://github.com/RexOps/rex-augeas\">(R)?ex</a> and more tools… Augeas is also used directly in C libraries such as libvirt and Nut.</p>\n<h1>Augeasproviders</h1>\n<p>In the Puppet world, the <a href=\"http://augeasproviders.com/\">Augeasproviders project</a> was created to develop native Puppet types and providers (in Ruby) based on Augeas.</p>\n<p>These providers use the Augeas Ruby bindings to draw on Augeas' power, all the while providing a simple interface for users, without the need to know how Augeas works.</p>\n<p>At the core of the Augeasproviders project, there is a base provider shipped in the <a href=\"https://github.com/hercules-team/augeasproviders_core\">herculesteam-augeasproviders_core</a> Puppet 
module, which provides an interface to build more providers, in a declarative way.</p>\n<p>For example, you can set the location of the node corresponding to the Puppet resource to manage in the Augeas tree:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">resource_path do |resource|\n  service = resource[:service]\n  type = resource[:type]\n  mod = resource[:module]\n  control_cond = (resource[:control_is_param] == :true) ? &quot;and control=&#39;#{resource[:control]}&#39;&quot; : &#39;&#39;\n  if target == &#39;/etc/pam.conf&#39;\n    &quot;$target/*[service=&#39;#{service}&#39; and type=&#39;#{type}&#39; and module=&#39;#{mod}&#39; #{control_cond}]&quot;\n  else\n    &quot;$target/*[type=&#39;#{type}&#39; and module=&#39;#{mod}&#39; #{control_cond}]&quot;\n  end\nend</code>\n        </deckgo-highlight-code>\n<p>The\n<code>create</code>\nand\n<code>destroy</code>\nmethods, as well as the getters and setters for the Puppet resource properties, can also be described in a similar fashion, making it simpler to <a href=\"https://github.com/hercules-team/augeasproviders/blob/master/docs/development.md\">develop new providers based on Augeas</a>.</p>\n<h1>Go bindings</h1>\n<p>As for many other languages, there are Go bindings for Augeas: </p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=dominikh%2Fgo-augeas\" style=\"border: 0; width: 100%;\"></iframe>\n<p>Much like the Ruby bindings, the Go library lets you manipulate an Augeas handler to query the Augeas tree, modify it, and save it.</p>\n<h1>Go structure tags</h1>\n<p>In the Go world, structures have optional tags which can be used for parsing and writing to external formats.</p>\n<p>This is used to reflect structures as JSON, YAML, XML, or specify library options to manage the structure fields: </p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">// Version is an S3 bucket version\ntype Version struct {\n  ID           uint      `sql:&quot;AUTO_INCREMENT&quot; gorm:&quot;primary_key&quot; json:&quot;-&quot;`\n  VersionID    string    `gorm:&quot;index&quot; json:&quot;version_id&quot;`\n  LastModified time.Time `json:&quot;last_modified&quot;`\n}</code>\n        </deckgo-highlight-code>\n<p>They are also used to build program interfaces by <a href=\"https://github.com/jessevdk/go-flags\">specifying configuration options</a>:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">type config struct {\n  Version        bool     `short:&quot;V&quot; long:&quot;version&quot; description:&quot;Display version.&quot;`\n  Token          string   `short:&quot;t&quot; long:&quot;token&quot; description:&quot;GitHub token&quot; env:&quot;GITHUB_TOKEN&quot;`\n  Users          []string `short:&quot;u&quot; long:&quot;users&quot; description:&quot;GitHub users to include (comma separated).&quot; env:&quot;GITHUB_USERS&quot; env-delim:&quot;,&quot;`\n  Manpage        bool     `short:&quot;m&quot; long:&quot;manpage&quot; description:&quot;Output manpage.&quot;`\n}</code>\n        </deckgo-highlight-code>\n<p>The tags above (\n<code>sql</code>\n,\n<code>gorm</code>\n,\n<code>json</code>\n,\n<code>short</code>\n,\n<code>long</code>\n,\n<code>description</code>\n,\n<code>env</code>\n,\n<code>env-delim</code>\n) are used by Go libraries through the <a href=\"https://golang.org/pkg/reflect/\">Go reflection library</a> to provide dynamic features for structures.</p>\n<h1>Narcissus: Augeasproviders for the Go world</h1>\n<p>While Hercules is known in Greek mythology for his labors (including cleaning the stables of King Augeas), Narcissus is famous for gazing at his reflection in the water.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=raphink%2Fnarcissus\" style=\"border: 0; width: 100%;\"></iframe>\n<p>The Narcissus project is a Go library providing structure tags to manage configuration files with Augeas. 
It then maps structure tags to the Augeas tree dynamically, allowing you to expose any configuration file (or file stanza) known to Augeas as a Go structure.</p>\n<h2>Example: <code>/etc/group</code></h2>\n<p>The Unix\n<code>group</code>\nfile is very simple and well-known. It features one group per line, with fields separated by colons:</p>\n<deckgo-highlight-code    highlight-lines=\"undefined\">\n          <code slot=\"code\">root:x:0:\ndaemon:x:1:\nbin:x:2:\nsys:x:3:\nadm:x:4:syslog,raphink</code>\n        </deckgo-highlight-code>\n<h3>Parsing with Augeas</h3>\n<p>Augeas parses it by storing each group name as a node key in the tree, and exposing each field by its name:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">$ augtool print /files/etc/group\n/files/etc/group\n/files/etc/group/root\n/files/etc/group/root/password = &quot;x&quot;\n/files/etc/group/root/gid = &quot;0&quot;\n/files/etc/group/daemon\n/files/etc/group/daemon/password = &quot;x&quot;\n/files/etc/group/daemon/gid = &quot;1&quot;\n/files/etc/group/bin\n/files/etc/group/bin/password = &quot;x&quot;\n/files/etc/group/bin/gid = &quot;2&quot;\n/files/etc/group/sys\n/files/etc/group/sys/password = &quot;x&quot;\n/files/etc/group/sys/gid = &quot;3&quot;\n/files/etc/group/adm\n/files/etc/group/adm/password = &quot;x&quot;\n/files/etc/group/adm/gid = &quot;4&quot;\n/files/etc/group/adm/user[1] = &quot;syslog&quot;\n/files/etc/group/adm/user[2] = &quot;raphink&quot;</code>\n        </deckgo-highlight-code>\n<p>Modifying any of these fields and saving the tree will result in an updated\n<code>/etc/group</code>\nfile. 
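</p>
<p>Augeas writes the parser for you, but to make the tree-to-structure mapping concrete, here is the same information extracted by hand from a single line. The <code>parseGroupLine</code> helper below is purely illustrative and not part of any library:</p>

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// group holds the fields Augeas exposes for one /etc/group entry.
type group struct {
	Name     string
	Password string
	GID      int
	Users    []string
}

// parseGroupLine splits one colon-separated line into the same fields
// as the Augeas tree shown above (name, password, gid, user[]).
func parseGroupLine(line string) (group, error) {
	fields := strings.Split(line, ":")
	if len(fields) != 4 {
		return group{}, fmt.Errorf("expected 4 fields, got %d", len(fields))
	}
	gid, err := strconv.Atoi(fields[2])
	if err != nil {
		return group{}, err
	}
	var users []string
	if fields[3] != "" {
		users = strings.Split(fields[3], ",")
	}
	return group{Name: fields[0], Password: fields[1], GID: gid, Users: users}, nil
}

func main() {
	g, _ := parseGroupLine("adm:x:4:syslog,raphink")
	fmt.Println(g.Name, g.GID, g.Users) // adm 4 [syslog raphink]
}
```

<p>The point of Augeas, of course, is that you never maintain such ad-hoc parsers yourself: the lens handles the edge cases and, crucially, can write changes back without destroying formatting.</p>
<p>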
Adding new entries in the tree will result in additional entries in\n<code>/etc/group</code>\n, provided the tree is valid for the\n<code>Group.lns</code>\nAugeas lens.</p>\n<h2>Parsing with Narcissus</h2>\n<p>In our Go code, we can easily map a\n<code>group</code>\nstructure to entries in the\n<code>/etc/group</code>\nfile by using the Narcissus package:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">import (\n\t&quot;log&quot;\n\t&quot;strings&quot;\n\n\t&quot;honnef.co/go/augeas&quot;\n\t&quot;github.com/raphink/narcissus&quot;\n)\n\ntype group struct {\n\taugeasPath string\n\tName       string   `narcissus:&quot;.,value-from-label&quot;`\n\tPassword   string   `narcissus:&quot;password&quot;`\n\tGID        int      `narcissus:&quot;gid&quot;`\n\tUsers      []string `narcissus:&quot;user&quot;`\n}\n\nfunc main() {\n\taug, err := augeas.New(&quot;/&quot;, &quot;&quot;, augeas.None)\n\tif err != nil {\n\t\tlog.Fatal(&quot;Failed to create Augeas handler&quot;)\n\t}\n\tn := narcissus.New(&amp;aug)\n\n\tgroup := &amp;group{\n\t\taugeasPath: &quot;/files/etc/group/docker&quot;,\n\t}\n\terr = n.Parse(group)\n\tif err != nil {\n\t\tlog.Fatalf(&quot;Failed to retrieve group: %v&quot;, err)\n\t}\n\n\tlog.Printf(&quot;GID=%v&quot;, group.GID)\n\tlog.Printf(&quot;Users=%v&quot;, strings.Join(group.Users, &quot;,&quot;))\n}</code>\n        </deckgo-highlight-code>\n<p>The\n<code>augeasPath</code>\nfield is necessary to store the location of the file in the Augeas tree, in our case\n<code>/files/etc/group/docker</code>\nto manage the\n<code>docker</code>\ngroup in the file.</p>\n<p>Then each structure field is linked to the corresponding node name in the Augeas tree:</p>\n<ul>\n<li>Name is taken from the node label, so we use the special value\n<code>.,value-from-label</code>\n, where\n<code>.</code>\nrefers to the current node, and\n<code>value-from-label</code>\ntells Narcissus how to get the 
value</li>\n<li><code>password</code> for the Password</li>\n<li><code>gid</code> for the GID</li>\n<li><code>user</code> for the Users, parsed as a slice of strings (i.e. the\n<code>user</code>\nlabel might appear more than once in the Augeas tree)</li>\n</ul>\n<p>Note that all fields must be capitalized (exported) in order for Go reflection to work.</p>\n<p>Once we call the\n<code>Parse()</code>\nmethod on the Narcissus handler, the structure is dynamically filled with the values in the Augeas tree, so we can access the gid with\n<code>group.GID</code>\nand the users with\n<code>group.Users</code>\n.</p>\n<h2>Modifying files</h2>\n<p>The main point of the Augeas library is not just to parse, but also to modify configuration files in a versatile way.</p>\n<p>In Narcissus, this is done by calling the\n<code>Write()</code>\nmethod on the Narcissus handler. Narcissus then transforms the structure back to the Augeas tree and saves it.</p>\n<p>For example, using the\n<code>PasswdUser</code>\ntype provided by default in the\n<code>narcissus</code>\npackage:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">user := n.NewPasswdUser(&quot;raphink&quot;)\n\n// Modify UID\nuser.UID = 42\n\nif err := n.Write(user); err != nil {\n  log.Fatalf(&quot;Failed to save user: %v&quot;, err)\n}</code>\n        </deckgo-highlight-code>\n<h2>Included formats</h2>\n<p>Narcissus comes with a few structures already mapped:</p>\n<ul>\n<li><code>/etc/fstab</code>, with the\n<code>NewFstab()</code>\nmethod</li>\n<li><code>/etc/hosts</code>, with the\n<code>NewHosts()</code>\nmethod</li>\n<li><code>/etc/passwd</code>, with the\n<code>NewPasswd()</code>\nand\n<code>NewPasswdUser()</code>\nmethods</li>\n<li><code>/etc/services</code>, with the\n<code>NewServices()</code>\nand\n<code>NewService()</code>\nmethods</li>\n</ul>\n<p>Which structures will 
you map with it? Which tool could benefit from this library?</p>\n<p>Let me know in the comments!</p>\n<p><em><a href=\"https://dev.to/raphink/configuration-surgery-with-go-structure-tags-12a4\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/diffing-puppet-environments-1fno/","relativePath":"posts/diffing-puppet-environments-1fno.md","relativeDir":"posts","base":"diffing-puppet-environments-1fno.md","name":"diffing-puppet-environments-1fno","frontmatter":{"title":"Diffing Puppet Environments","stackbit_url_path":"posts/diffing-puppet-environments-1fno","date":"2020-05-01T14:49:22.209Z","excerpt":"Puppet Catalog Diff helps to visualize the differences between two Puppet environments","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":0,"positive_reactions_count":5,"tags":["puppet","devops","testing","codequality"],"canonical_url":"https://dev.to/camptocamp-ops/diffing-puppet-environments-1fno","template":"post"},"html":"<p><a href=\"https://puppet.com\">Puppet</a> is a great tool for configuration management, letting you automate hundreds to thousands of nodes at a time in an Infrastructure-as-Code approach.</p>\n<h1>Usual Puppet Control Repository Workflow</h1>\n<p>Good practice encourages using multiple environments in a Puppet setup. 
Usually, critical nodes are pinned to the\n<code>production</code>\nenvironment, while less critical nodes can be associated with staging environments.</p>\n<p>Using Git along with a <a href=\"https://github.com/puppetlabs/control-repo\">Control Repository</a>, code changes are typically produced in feature branches which are turned into Puppet environments. Once features have been tested on said environment, the feature branch can be merged into a staging branch, where the changes will start affecting nodes pinned to that Puppet environment.</p>\n<p>Finally, once in a while, changes are merged from the staging branch into the production branch, thus affecting all nodes pinned to production.</p>\n<h1>Code Validation</h1>\n<p>While the workflow described is helpful, validating a branch is often a neglected step. Pointing all staging nodes to a feature branch would miss the point entirely, so validation is often done manually, by identifying nodes that <em>may</em> be impacted by the change and running Puppet manually on these nodes, preferably in dry-run mode (\n<code>--noop</code>\n).</p>\n<p>When deploying to hundreds of nodes, testing a few is hardly a guarantee that things will go well on all nodes once the branch is merged.</p>\n<p>Fortunately for us, there are tools which can help!</p>\n<h1>Puppet Catalog Diff</h1>\n<p>It might be hard to believe as this module is so little known, but the Puppet Catalog Diff project was started some 10 years ago by <a href=\"https://www.devco.net\">R.I. Pienaar</a>! 
<a href=\"https://github.com/acidprime/puppet-catalog-diff\">Adopted by Zack Smith</a>, it was maintained for a few years, but has been left mostly unmaintained since 2016.</p>\n<p>As we've used it for years (and GitHub's <a href=\"https://github.com/github/octocatalog-diff\">octocatalog-diff</a> never fit our needs), we've adopted it and you will now find the latest version on our GitHub account:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=camptocamp%2Fpuppet-catalog-diff%20no-readme\" style=\"border: 0; width: 100%;\"></iframe>\n<h1>Installing</h1>\n<p>Puppet Catalog Diff is a standard Puppet module. You can thus install it using\n<code>puppet module install</code>\n, r10k, or even just git.</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ git clone https://github.com/camptocamp/puppet-catalog-diff.git /etc/puppetlabs/code/modules/catalog_diff</code>\n        </deckgo-highlight-code>\n<h1>What does it do?</h1>\n<p>As its name implies, Puppet Catalog Diff computes diffs between Puppet catalogs.</p>\n<p>The module provides three Puppet faces:</p>\n<ul>\n<li><code>puppet catalog seed</code> generates catalogs from a Puppet Master (or PuppetDB)</li>\n<li><code>puppet catalog pull</code> wraps around the\n<code>seed</code>\nface to retrieve catalogs from two environments for each node</li>\n<li><code>puppet catalog diff</code> analyzes multiple catalogs and returns the differences per node</li>\n</ul>\n<h2>Local Diff</h2>\n<p>To get started, you can diff local catalogs (in\n<code>.json</code>\n,\n<code>.pson</code>\n, or\n<code>.yaml</code>\nformats):</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff catalog1.pson catalog2.pson</code>\n        </deckgo-highlight-code>\n<p>will return the differences between the two catalogs.</p>\n<h2>Diff with Catalog Retrieval</h2>\n<p>Most often, you 
will want to use Puppet Catalog Diff to retrieve catalogs from Puppet Masters.</p>\n<h3>Setup</h3>\n<h4>Generate a Certificate</h4>\n<p>Everything in the Puppet world uses OpenSSL for authentication. Setting up Puppet Catalog Diff will thus require an OpenSSL certificate. This can be any certificate signed by the Puppet CA. For example, you can use the <a href=\"https://puppet.com/docs/puppet/latest/puppet_server_ca_cli.html\">\n<code>puppetserver ca</code>\ncommand</a> to generate a certificate:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppetserver ca generate --certname catalog-diff</code>\n        </deckgo-highlight-code>\n<p>Retrieve the key and certificate.</p>\n<h4>Set up the Puppet Master</h4>\n<p>By default, Puppet Masters only deliver catalogs for the nodes requesting them. This is set up in the <a href=\"https://puppet.com/docs/puppetserver/latest/config_file_auth.html\">\n<code>auth.conf</code>\n</a> configuration file, with a rule like:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">{\n    # Allow nodes to retrieve their own catalog\n    match-request: {\n        path: &quot;^/puppet/v3/catalog/([^/]+)$&quot;\n        type: regex\n        method: [get, post]\n    }\n    allow: &quot;$1&quot;\n    sort-order: 500\n    name: &quot;puppetlabs catalog&quot;\n},</code>\n        </deckgo-highlight-code>\n<p>You can deploy this rule using the\n<code>puppet_authorization::rule</code>\ndefined type from the <a href=\"https://forge.puppet.com/puppetlabs/puppet_authorization\">puppet_authorization</a> Puppet module.</p>\n<p>To allow the\n<code>catalog-diff</code>\ncertificate to get any catalog from the Puppet Master, we can modify that rule:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">{\n    # Allow nodes to retrieve their own catalog\n    match-request: {\n        path: &quot;^/puppet/v3/catalog/([^/]+)$&quot;\n        type: 
regex\n        method: [get, post]\n    }\n    allow: [&quot;$1&quot;,&quot;catalog-diff&quot;]\n    sort-order: 500\n    name: &quot;puppetlabs catalog&quot;\n},</code>\n        </deckgo-highlight-code>\n<p>Better yet, we can <a href=\"https://puppet.com/docs/puppet/latest/ssl_attributes_extensions.html\">add a certificate extension</a> to the catalog diff certificate, e.g.\n<code>pp_authorization: catalog</code>\n, and allow this extension in\n<code>auth.conf</code>\n:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">{\n    # Allow nodes and authorized extensions to retrieve catalogs\n    match-request: {\n        path: &quot;^/puppet/v3/catalog/([^/]+)$&quot;\n        type: regex\n        method: [get, post]\n    }\n    allow: [\n        &quot;$1&quot;,\n        {\n            extensions: {\n                pp_authorization: &quot;catalog&quot;\n            }\n        }\n    ]\n    sort-order: 500\n    name: &quot;puppetlabs catalog&quot;\n},</code>\n        </deckgo-highlight-code>\n<h3>Comparing Environments</h3>\n<p>When comparing environments, Puppet Catalog Diff will connect to one or multiple Puppet Masters and get catalogs for each node.</p>\n<p>As you may have many nodes to test, it is easier to get the list of nodes to analyze from the PuppetDB. This can be achieved with the\n<code>--use_puppetdb</code>\nflag, along with\n<code>--filter_old_env</code>\n. 
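For clarity, the node selection these flags perform boils down to a node query against PuppetDB. As a sketch (the exact query Puppet Catalog Diff issues may differ), the PQL equivalent of "all active nodes whose catalog came from the production environment" is:

```
nodes[certname] { catalog_environment = "production" }
```

Node queries exclude deactivated and expired nodes by default, which is what restricts the result to active nodes.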
This will select all the active nodes in PuppetDB that are associated with the first environment.</p>\n<p>For example, if we run:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff \\\n     puppet.example.com/production \\\n     puppet.example.com/staging \\\n     --use_puppetdb --filter_old_env</code>\n        </deckgo-highlight-code>\n<p><em>Note (2020-05-07): Since the release of Puppet Catalog Diff 2.0.0,\n<code>--use_puppetdb</code>\nis now deprecated and\n<code>--filter_old_env</code>\nis the default.</em></p>\n<p>Puppet Catalog Diff will connect to the PuppetDB, get all the active nodes from the\n<code>production</code>\nenvironment, and then for each of them, retrieve a catalog for the node from:</p>\n<ul>\n<li>the\n<code>production</code>\nenvironment on the\n<code>puppet.example.com</code>\nPuppet Master</li>\n<li>the\n<code>staging</code>\nenvironment on the\n<code>puppet.example.com</code>\nPuppet Master</li>\n</ul>\n<p>It will then compute differences between each pair of catalogs and output them.</p>\n<h2>Testing Version Upgrades</h2>\n<p>One very necessary check is testing changes between two versions of Puppet Master installations. Puppet Catalog Diff allows you to specify different masters for the two environments to compare, so you can use the following command to compare catalogs from two Puppet Masters on the same Puppet environment (provided the environment is deployed to both masters):</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff \\\n    puppet5.example.com/production \\\n    puppet6.example.com/production \\\n    --use_puppetdb --filter_old_env</code>\n        </deckgo-highlight-code>\n<h1>Improving Performance</h1>\n<p>Retrieving and comparing catalogs can be resource-consuming. Very often, you will want to diff a new environment (staging or feature) against a more stable one. 
Since we can get the nodes associated with the stable environment from PuppetDB, we might as well get the cached catalogs from PuppetDB for this branch, too. This is possible using the\n<code>--old_catalog_from_puppetdb</code>\nflag:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff \\\n     puppet.example.com/production \\\n     puppet.example.com/staging \\\n     --use_puppetdb --filter_old_env --old_catalog_from_puppetdb</code>\n        </deckgo-highlight-code>\n<p>Catalogs will be retrieved from PuppetDB for the\n<code>production</code>\nenvironment, and from the Puppet Master for the\n<code>staging</code>\nenvironment.</p>\n<h2>Tuning the diff</h2>\n<p>Several options are available to tune the diff output:</p>\n<ul>\n<li><code>--show_resource_diff</code>\nwill show the details of how each resource was modified</li>\n<li><code>--content_diff</code>\nwill generate a separate content diff for file contents, in addition to the parameters diff</li>\n<li><code>--changed_depth 1000</code>\nsets the number of nodes to display at the end of the diff, sorted by number of diffs</li>\n</ul>\n<h1>Trusted Facts and Certless Requests</h1>\n<p>Puppet provides a special variable named\n<code>$trusted</code>\nand called <a href=\"https://puppet.com/docs/puppet/latest/lang_facts_builtin_variables.html#%20trusted-facts\">Trusted Facts</a>. This variable contains information from the Puppet certificate. 
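To make this concrete, here is an illustrative snippet of how trusted facts are typically consumed in Puppet code; `pp_role` is one of Puppet's registered extension short names, and the profile class is invented for the example:

```puppet
# Hypothetical example: branch on a certificate extension via $trusted.
if $trusted['extensions']['pp_role'] == 'webserver' {
  include profile::webserver
}
```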
This allows the Puppet Master to get information, such as the certname or the certificate extensions, and be sure that it cannot be falsified.</p>\n<p>However, using these trusted facts in your Puppet code (or Hiera hierarchy) breaks compilation with Puppet Catalog Diff, since the catalog diff's certificate does not contain these trusted variables.</p>\n<p>If you are using Puppet 6.3 or later on your Puppet Master, you can make use of the new <a href=\"https://puppet.com/docs/puppetserver/latest/puppet-api/v4/catalog.html\">certless catalog API</a> to bypass this restriction.</p>\n<h2>Setup</h2>\n<p>Since this uses a different API endpoint, we need to set up\n<code>auth.conf</code>\nfor it, for example:</p>\n<deckgo-highlight-code ruby   highlight-lines=\"\">\n          <code slot=\"code\">{\n    # Allow certless catalog requests\n    match-request: {\n        path: &quot;^/puppet/v4/catalog&quot;\n        type: regex\n        method: [post]\n    }\n    allow: [\n        {\n            extensions: {\n                pp_authorization: &quot;catalog&quot;\n            }\n        }\n    ]\n    sort-order: 500\n    name: &quot;puppetlabs certless catalog&quot;\n},</code>\n        </deckgo-highlight-code>\n<h2>Usage</h2>\n<p>The\n<code>--certless</code>\nflag will tell Puppet Catalog Diff to use the new certless catalog API in place of the standard one.</p>\n<p>For example, you can retrieve the old catalogs from PuppetDB and the new catalogs from the certless catalog API:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff \\\n     puppet.example.com/production \\\n     puppet.example.com/staging \\\n     --use_puppetdb --filter_old_env \\\n     --old_catalog_from_puppetdb --certless</code>\n        </deckgo-highlight-code>\n<h1>CI Integration</h1>\n<p>If you are using a Continuous Integration platform, you can take advantage of it by integrating your Puppet control repository into it with Puppet 
Catalog Diff.</p>\n<p>While general <a href=\"https://dev.to/camptocamp-ops/cleaning-up-puppet-code-4da2\">Code Quality tasks</a> can be launched in a pipeline before deploying the code, Puppet Catalog Diff is typically a task that can be launched in a merge request.</p>\n<p>For example, you can launch the following command in a GitLab CI job:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ puppet catalog diff \\\n     puppet.example.com/${CI_MERGE_REQUEST_TARGET_BRANCH_NAME} \\\n     puppet.example.com/${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME} \\\n     --show_resource_diff --content_diff --changed_depth 1000 \\\n     --use_puppetdb --filter_old_env --old_catalog_from_puppetdb \\\n     --certless --threads 4 \\\n     --output_report /srv/catalog-diff/mr_${CI_MERGE_REQUEST_IID}_${CI_JOB_ID}.json</code>\n        </deckgo-highlight-code>\n<p>The\n<code>--output_report</code>\noption saves the output as a JSON document, which can be used later on.</p>\n<h1>Limitations</h1>\n<p>Puppet Catalog Diff compares Puppet catalogs. However, catalog changes do not account for all changes in a Puppet agent run. 
Plugins can play a role, too.</p>\n<p>If your change involves agent-side plugins (facts, types &#x26; providers, augeas lenses), Puppet Catalog Diff won't allow you to predict the result of these changes.</p>\n<h1>Visualizing changes</h1>\n<p>Changes in Puppet code sometimes generate large diffs, which can be hard to parse in text form.</p>\n<p>The <a href=\"https://github.com/camptocamp/puppet-catalog-diff-viewer\">Puppet Catalog Diff Viewer</a> project allows you to visualize Puppet Catalog Diff reports (as generated by the\n<code>--output_report</code>\noption) in a Web UI.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=camptocamp%2Fpuppet-catalog-diff-viewer%20no-readme\" style=\"border: 0; width: 100%;\"></iframe>\n<p>This interface is currently read-only, with no persistence.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/b73sm2n5ie2yip4hh427.png\" alt=\"Puppet Catalog Diff Viewer\"></p>\n<p>Let me know how you use Puppet Catalog Diff, and, as usual, we welcome Pull Requests!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/diffing-puppet-environments-1fno\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/enhance-and-colorize-old-pictures-5c9g/","relativePath":"posts/enhance-and-colorize-old-pictures-5c9g.md","relativeDir":"posts","base":"enhance-and-colorize-old-pictures-5c9g.md","name":"enhance-and-colorize-old-pictures-5c9g","frontmatter":{"title":"Enhance, Colorize, and Animate Old 
Pictures","stackbit_url_path":"posts/enhance-and-colorize-old-pictures-5c9g","date":"2020-06-29T16:45:38.653Z","excerpt":"MyHeritage in Color allows you to fine-tune automatically colorized and enhanced photographs","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--PKPnYlZ7--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/b78xjnsl62d9by6bkyoh.jpg","comments_count":0,"positive_reactions_count":11,"tags":["tutorial","ai","machinelearning","photography"],"canonical_url":"https://dev.to/raphink/enhance-and-colorize-old-pictures-5c9g","template":"post"},"html":"<p>Over the last 2 years, Machine Learning has brought impressive breakthroughs to image processing techniques. In particular, photography colorization has seen amazing progress, thanks mainly to the work of two developers: <a href=\"https://twitter.com/citnaj\">Jason Antic</a> and <a href=\"https://twitter.com/danasday\">Dana Kelley</a>.</p>\n<h2>MyHeritage in Color</h2>\n<p>Their model is so precise that MyHeritage hired them to include a colorization tool directly on their website. As a result, you can colorize pictures for free at <a href=\"https://www.myheritage.fr/incolor\">https://www.myheritage.fr/incolor</a>. Paid MyHeritage members are not limited in the number of colorizations, and don't get the MyHeritage watermark on the result pictures.</p>\n<p>Additionally, MyHeritage recently added a new AI-based feature to this tool, by allowing users to enhance faces in their pictures:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/twitter?args=1276608740884721665\" style=\"border: 0; width: 100%;\"></iframe>\n<p>Using the tool from your MyHeritage picture collection is extremely simple. 
You just need to upload a picture and use the \"enhance\" and \"colorize\" buttons one after the other.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/wt2uwlf0t0dhu2w53w91.png\" alt=\"Colorization and Enhance tools\"></p>\n<h2>Tuning colorization</h2>\n<p>In the last few weeks, I've seen lots of people using MyHeritage In Color on their pictures, and they're usually quite content with the result. However, most of them have no idea they could get even better results by tuning the rendering.</p>\n<p>Once a picture is colorized, a gear icon appears to let you fine-tune the result:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/4prlyuwyikgo0qhi9v69.png\" alt=\"Tuning gear icon\"></p>\n<p>There are 4 parameters which can be tuned using this icon:</p>\n<ul>\n<li>Contrast enhancement: leave it checked usually. It is mainly useful if your input picture is already very contrasted, which is not the case of most pictures.</li>\n<li>Saturation: use this to keep the same colors, but make them \"stronger\"</li>\n<li>Automatic vs Manual rendering: this allows you to set a rendering factor manually. 
Generally, a lower factor brings more colors, but also more colorization \"mistakes\" (such as purple zones)</li>\n<li>Colorization model: this lets you use an alternate model, which at times looks better.</li>\n</ul>\n<h2>General observations</h2>\n<p>Generally speaking, this is what I have found from colorizing hundreds of pictures, mostly portraits of people:</p>\n<ul>\n<li>If clothes (in particular dark ones) have purple spots over them but the faces are rendered well, try switching to the \"Alternative\" model</li>\n<li>If faces are colorized properly but legs are grey, try the \"Alternative\" model as well</li>\n<li>If the colors are fine but aren't coming through strongly enough, increase the saturation</li>\n<li>If the colors are too grey-ish (especially with low-res pictures), try reducing the rendering factor to 16.</li>\n<li>If the picture is high-res enough and renders rather well but has uncolorized spots, you can try increasing the rendering factor. Note that rendering will generally take longer with a higher factor.</li>\n</ul>\n<p>Here is an example:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/of5je7h1ou8nd48dtugh.png\" alt=\"Default settings\"></p>\n<p>This colorized (and enhanced) picture doesn't look bad, but the clothes have lots of purple spots all over them. Switching to the alternative model reduces this, without affecting the faces too much:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/epmwc8x4r4zp90nu6jju.png\" alt=\"Alternative model\"></p>\n<p>There are still some weird zones though, so I want to try and reduce these by increasing the rendering factor. 
After playing around, I ended up with a factor of 64 for that picture:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/sqv6a9c7s7rrb2gtb9l1.png\" alt=\"Render factor\"></p>\n<p>However, the colors are getting slightly dim now, so I've increased the saturation to 1.2 to counteract that:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/l6ttuj25eh7jxtzolp28.png\" alt=\"Saturation\"></p>\n<p>Granted, the results on this picture are far from perfect, as it is hard to colorize (which is why I chose it!).</p>\n<h2>Methodology</h2>\n<p>When tuning a colorization, every time you modify a parameter, you can preview the changes, and then choose to save them.</p>\n<p>In a very practical fashion, MyHeritage lets you click on the colorized picture to compare with the previously saved version.</p>\n<p>However, it doesn't save the results for each combination, so when you play around with parameters, you can end up waiting a long time to go back to previous combinations.</p>\n<p>For this reason, I use A/B testing when colorizing. 
My process is:</p>\n<ol>\n<li>Colorize the picture a first time</li>\n<li>Open the tuning interface</li>\n<li>Tune one parameter, preview and compare</li>\n<li>If the result is better than the previously saved picture, save it</li>\n<li>Repeat from step 2 until you can't improve the result</li>\n</ol>\n<p>The drawback of this method is that you need to re-open the tuning interface every time, but in the end, you'll save lots of rendering time and you know you're saving the best version you could get.</p>\n<h2>Fixing other color issues</h2>\n<p>While the settings in MyHeritage in Color clearly help, they can't yet get you to a perfect result most of the time.</p>\n<p>In many cases, some parameters will have better results on people, while others will improve objects or the background.</p>\n<p>One thing I've started doing is downloading multiple versions of the colorization, importing them all as layers in Gimp, and cutting them so as to get the best result in every zone.</p>\n<p>In other cases, the tint on some objects may not be perfect despite trying to tune the engine. In these cases, you can duplicate these zones in Gimp and tune their tint/saturation/hue individually until you get the result you want.</p>\n<h2>Improving Face Enhancing</h2>\n<p>Face enhancing can lead to very impressive results by recreating realistic faces that match the outline of the faces in the blurry pictures.</p>\n<p>For example, in the picture I used before, some of the children's faces look amazing:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/2svy7z72gnqo6cl90oql.jpg\" alt=\"Face enhancement and colorization\"></p>\n<p>Yes, the original is on the left!</p>\n<p>At times however, little dots or other issues with the original picture can lead the face enhancer astray and generate some very disturbing results. 
Here for example, there's a line on the face in the original picture, which gets \"interpreted\" as a scar:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/i8d5qjc1ysxlegxooylj.jpg\" alt=\"Scar mouth\"></p>\n<p>And in this one, the left eye looks too dark in the original picture, leading to a strange color asymmetry:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/dbzchaqre1c8eyy3zc6e.jpg\" alt=\"Eye color asymmetry\"></p>\n<p>Fortunately, these aren't too hard to fix. In the case of the \"scar\", erasing it (using Gimp with simple editing methods such as the stamp tool) fixed the issue:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/b78xjnsl62d9by6bkyoh.jpg\" alt=\"Fixed scar mouth\"></p>\n<p>The eyes weren't much harder to fix. Since the face is straight, I just copied the right pupil (just the pupil, not the entire eye) and pasted it on the left eye. The difference is hardly noticeable in the B&#x26;W picture, but it's enough for the AI algorithm to pick it up properly:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/2xxhxugvmo4wqnxdj2qa.jpg\" alt=\"Fixed eye color\"></p>\n<h2>Edit (2021-02-26): Animate the Portrait</h2>\n<p>As of February 2021, MyHeritage now allows you to animate portraits that were enhanced.</p>\n<p>From the picture view, simply click the animation button and choose the face to animate. You can choose from 10 different animations. If the face was colorized, the animation will honor it.</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/youtube?args=JVm5dEa9VlY\" style=\"border: 0; width: 100%;\"></iframe>\n<h2>How about you?</h2>\n<p>Have you colorized pictures with DeOldify or MyHeritage in Color? Have you found useful ways to get good results? 
Share them in the comments!</p>\n<p><em><a href=\"https://dev.to/raphink/enhance-and-colorize-old-pictures-5c9g\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/git-markdown-to-write-a-novel-ag6/","relativePath":"posts/git-markdown-to-write-a-novel-ag6.md","relativeDir":"posts","base":"git-markdown-to-write-a-novel-ag6.md","name":"git-markdown-to-write-a-novel-ag6","frontmatter":{"title":"Git & Markdown to write a novel","stackbit_url_path":"posts/git-markdown-to-write-a-novel-ag6","date":"2020-05-09T06:20:31.422Z","excerpt":"Using Git and Markdown to write a novel","thumb_img_path":null,"comments_count":0,"positive_reactions_count":1,"tags":["documentation","markdown","writing"],"canonical_url":"https://dev.to/raphink/git-markdown-to-write-a-novel-ag6","template":"post"},"html":"<p>This year, I started writing a historical novel about a branch of my family. I quickly realized I needed some tooling to organize my data: what I know about the characters, the places, a general timeline, etc.</p>\n<h1>Manuskript</h1>\n<p>I looked for software to do that and found that the reference is <a href=\"https://www.literatureandlatte.com/scrivener/overview\">Scrivener</a>. 
However interesting it looked, I'd rather use Open Source software whenever possible, so I started using <a href=\"https://www.theologeek.ch/manuskript/\">Manuskript</a>, an Open Source software for writers similar to Scrivener.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/fj7xlois1xl9d7qtzyjl.png\" alt=\"Manuskript on Ubuntu\"></p>\n<p>I laid out some chapters, characters, plots… then I realized the saving format was binary. It's actually a Zip archive which contains all the information in flat files:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">        1  2020-05-09 08:05   MANUSKRIPT\n      137  2020-05-09 08:05   infos.txt\n        0  2020-05-09 08:05   summary.txt\n       46  2020-05-09 08:05   status.txt\n      147  2020-05-09 08:05   labels.txt\n      284  2020-05-09 08:05   characters/0-Samuel_L-on.txt\n      281  2020-05-09 08:05   characters/1-L-on_Grunberg.txt\n      261  2020-05-09 08:05   characters/2-Adolphe_Grunberg.txt\n      309  2020-05-09 08:05   characters/3-Elisabeth_Rau.txt\n      112  2020-05-09 08:05   characters/4-Victor_Grunberg.txt\n      109  2020-05-09 08:05   characters/5-Maria_Schorr.txt\n      224  2020-05-09 08:05   characters/6-Fred_Grunberg.txt\n      125  2020-05-09 08:05   characters/7-Colonel_de_Villebois-Mareuil.txt\n      107  2020-05-09 08:05   characters/8-Said_Pacha.txt\n      109  2020-05-09 08:05   characters/9-Isaac_Aghion.txt\n       92  2020-05-09 08:05   outline/00-Setup.md\n      322  2020-05-09 08:05   outline/01-Les_joyaux_-gyptiens/folder.txt\n     3753  2020-05-09 08:05   outline/01-Les_joyaux_-gyptiens/0-Une_rencontre_inattendue.md\n     1706  2020-05-09 08:05   outline/01-Les_joyaux_-gyptiens/1-Office_de_chabbat.md\n      715  2020-05-09 08:05   outline/01-Les_joyaux_-gyptiens/2-Un_d-ner_chez_Isaac.md\n      482  2020-05-09 08:05   outline/01-Les_joyaux_-gyptiens/3-Proc-s_1.md\n      166  2020-05-09 08:05   
outline/01-Les_joyaux_-gyptiens/4-Sc-ne_5.md\n       83  2020-05-09 08:05   outline/02-Paris/folder.txt\n      102  2020-05-09 08:05   outline/02-Paris/0-Sc-ne_1.md\n       97  2020-05-09 08:05   outline/03-Etudes_des_enfants/folder.txt\n      103  2020-05-09 08:05   outline/03-Etudes_des_enfants/0-Sc-ne_1.md\n      103  2020-05-09 08:05   outline/03-Etudes_des_enfants/1-Sc-ne_2.md\n       95  2020-05-09 08:05   outline/04-Premiers_emplois/folder.txt\n      103  2020-05-09 08:05   outline/04-Premiers_emplois/0-Sc-ne_1.md\n       96  2020-05-09 08:05   outline/05-Vers_le_Transvaal/folder.txt\n      103  2020-05-09 08:05   outline/05-Vers_le_Transvaal/0-Sc-ne_1.md\n      334  2020-05-09 08:05   outline/21-Epilogue.md\n    40074  2020-05-09 08:05   revisions.xml\n      275  2020-05-09 08:05   world.opml\n     1669  2020-05-09 08:05   plots.xml\n     2499  2020-05-09 08:05   settings.txt</code>\n        </deckgo-highlight-code>\n<p>This is rather good news, and there are ways to automate PDF creation from this within the software, good news again!</p>\n<p>However, some things were problematic for me:</p>\n<ul>\n<li>I like to tune my pandoc/LaTeX rendering, and the options in Manuskript were pretty limited for that</li>\n<li>I'd rather not commit a single Zip file in Git, so I would have preferred to have all the files in the current directory</li>\n<li>Manuskript cannot be automated to compile the PDF; you need to go through the GUI and click on buttons. 
That's a blocker for me.</li>\n</ul>\n<h1>Going the full-git way</h1>\n<p>So I decided I would do everything in Git + Markdown, without the help of a third-party application.</p>\n<h2>Chapters</h2>\n<p>I'm currently storing the chapters in their own directories, with numbered Markdown files:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">chapitres/\n├── 01_les_joyaux_egyptiens\n│   ├── 00_titre.md\n│   ├── 01_une_rencontre_innatendue.md\n│   ├── 02_office_de_chabbat.md\n│   ├── 03_brody.md\n│   ├── 04_chabbat_2.md\n│   ├── 05_diner_chez_isaac.md\n│   └── 06_audience.md\n├── 02_les_enfants_grunberg\n│   ├── 00_titre.md\n│   ├── 01_leon.md\n│   ├── 02_retour.md\n│   ├── 03_coffre.md\n│   ├── 04_burtaux.md\n│   ├── 05_article.md\n│   └── 06_article2.md\n├── 03_annees_noires\n│   └── 00_titre.md\n├── 04_etudes\n│   └── 00_titre.md\n├── 05_premier_travail\n│   └── 00_titre.md\n├── 06_le_creusot\n│   └── 00_titre.md\n└── 21_epilogue.md</code>\n        </deckgo-highlight-code>\n<h2>Characters &#x26; Places</h2>\n<p>The character documentation is stored in Markdown files as well, and links can be made between them when necessary:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">personnages/\n├── Adolphe_Grunberg.md\n├── Charlotte_Grunberg.md\n├── Elisabeth_Rau.md\n├── Emilie_Grunberg.md\n├── Felix_Zottier.md\n├── Frederic_Grunberg.md\n├── Isaac_Aghion.md\n├── Jacques_Grunberg.md\n├── Leon_Grunberg.md\n├── Lucie_Leon.md\n├── Marc_Leon.md\n├── Mirel_Schorr.md\n├── Paul_Grunberg.md\n└── Samuel_Leon.md</code>\n        </deckgo-highlight-code>\n<p>Same goes for places:</p>\n<deckgo-highlight-code console   highlight-lines=\"\">\n          <code slot=\"code\">lieux/\n├── Alexandrie.md\n├── Brody.md\n├── Dubno.md\n├── Grunberg_Boulogne.md\n├── Paris.md\n├── Petite_Jonchere.md\n└── Vienne.md</code>\n        </deckgo-highlight-code>\n<p>Links can easily be made between these documentation 
files:</p>\n<deckgo-highlight-code markdown   highlight-lines=\"\">\n          <code slot=\"code\"># Mirel Schorr\n\n\n## État civil\n\n* Prénoms : Mirel, dite Maria\n* Nom : Grünberg\n* Nom de naissance : Schorr\n\n\n## Portrait\n\nInconnu\n\n\n## Description physique\n\nInconnu\n\n\n\n## Evénements\n\n\n* ca 1800 : ° à [Dubno](../lieux/Dubno.md) (Russie)\n* Enfance à [Brody](../lieux/Brody.md) (Autriche)\n  avec ses parents [Schachne](Schachne_Schorr.md)\n  et [Sarah](Sarah_Bick.md)\n  et ses frères [Naphtali](Naphtali_Mendel_Schorr.md)\n  et [Osias](Osias_Heschel_Schorr.md)\n\n* 1821: Mariage à [Brody](../lieux/Brody.md)\n\n* 1870 : habite à [Vienne](../lieux/Vienne.md) (testament [Adolphe](Adolphe_Grunberg.md))\n\n* 1877 : habite à [Paris](../lieux/Paris.md) (+)\n* 1877 : + à [Vienne](../lieux/Vienne.md)\n\n\n## Habillement\n\nVoir sa garde-robe en 1853 dans l&#39;inventaire de [Victor](Victor_Grunberg.md)</code>\n        </deckgo-highlight-code>\n<h2>Section stats</h2>\n<p>Manuskript provides stats about the number of words in a section. 
I've added a Make target for that:</p>\n<deckgo-highlight-code Makefile   highlight-lines=\"\">\n          <code slot=\"code\">stats:\n\tfind chapitres/ -name &quot;*.md&quot; -not -name &#39;00_titre.md&#39; -print0 | sort -z | xargs -0 wc -w</code>\n        </deckgo-highlight-code>\n<p>Caveat: it also lists the words in the comments…</p>\n<h1>Building the project</h1>\n<p>I'm using Pandoc to build the project with my own LaTeX template.</p>\n<deckgo-highlight-code Makefile   highlight-lines=\"\">\n          <code slot=\"code\">%.md:\n\tcat meta.md &gt; $@\n\tfind chapitres/ -name &quot;*.md&quot;  -print0 | sort -z | xargs -0 cat &gt;&gt; $@\n\n%.tex: %.md\n\tpandoc --pdf-engine lualatex  --template extended.tex \\\n\t\t   --variable numbersections --toc --variable toc-depth=2 \\\n\t\t   --variable documentclass=memoir --variable fontsize=12pt \\\n\t\t   --filter pandoc-citeproc \\\n\t\t   --verbose \\\n\t\t   $&lt; -o $@\n\n%.pdf: %.tex\n\tOSFONTDIR=$(FONTSDIR) lualatex $&lt;\n\tmakeindex $*.idx\n\tOSFONTDIR=$(FONTSDIR) lualatex $&lt;</code>\n        </deckgo-highlight-code>\n<p>The novel project can be found in this GitHub repository:</p>\n<iframe class=\"liquidTag\" src=\"https://dev.to/embed/github?args=raphink%2Fgenearoman\" style=\"border: 0; width: 100%;\"></iframe>\n<p><em><a href=\"https://dev.to/raphink/git-markdown-to-write-a-novel-ag6\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    
"},{"url":"/posts/how-to-allow-dynamic-terraform-provider-configuration-20ik/","relativePath":"posts/how-to-allow-dynamic-terraform-provider-configuration-20ik.md","relativeDir":"posts","base":"how-to-allow-dynamic-terraform-provider-configuration-20ik.md","name":"how-to-allow-dynamic-terraform-provider-configuration-20ik","frontmatter":{"title":"How to allow dynamic Terraform Provider Configuration","stackbit_url_path":"posts/how-to-allow-dynamic-terraform-provider-configuration-20ik","date":"2021-05-11T11:47:57.105Z","excerpt":"Terraform providers can be dynamically configured using other resource attributes if their code allows for it","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--rch8h5M6--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bjkdnrg8gmbjiazvn8l8.jpg","comments_count":0,"positive_reactions_count":7,"tags":["terraform","devops","go","cfgmgmt"],"canonical_url":"https://dev.to/camptocamp-ops/how-to-allow-dynamic-terraform-provider-configuration-20ik","template":"post"},"html":"<p><a href=\"http://terraform.io/\">Terraform</a> relies heavily on the concept of <a href=\"https://www.terraform.io/docs/providers/index.html\">providers</a>, a base brick which consists of Go plugins enabling the communication with an API.</p>\n<p>Each provider gives access to one or more resource types, and these resources then manage objects on the target API.</p>\n<p>Most of the time, a provider's configuration is static, e.g.</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">provider &quot;aws&quot; {\n  region = &quot;us-east-1&quot;\n}</code>\n        </deckgo-highlight-code>\n<p>However, in some cases, it is useful to configure a provider dynamically, using the attribute values from other resources as input for the provider's configuration.</p>\n<p>I'll use the example of the <a 
href=\"https://github.com/oboukili/terraform-provider-argocd\">Argo CD provider</a>. <em>In a single Terraform run</em>, we would like to:</p>\n<ul>\n<li>install a Kubernetes cluster (using a <a href=\"https://devops-stack.io\">DevOps Stack</a> K3s Terraform module)</li>\n<li>install Argo CD on the the cluster using the <a href=\"https://registry.terraform.io/providers/hashicorp/helm/latest/docs\">Helm provider</a></li>\n<li>instantiate Argo CD resources (projects, applications, etc.) on this new Argo CD server.</li>\n</ul>\n<p>Our code will look like this:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\"># Install Kubernetes &amp; Argo CD using a local module\n# (from https://devops-stack.io)\nmodule &quot;cluster&quot; {\n  source = &quot;git::https://github.com/camptocamp/devops-stack.git//modules/k3s/docker?ref=master&quot;\n\n  cluster_name = &quot;default&quot;\n  node_count   = 1\n}\n\n# /!\\ Setup the Argo CD provider dynamically\n# based on the cluster module&#39;s output\nprovider &quot;argocd&quot; {\n  server_addr = module.cluster.argocd_server\n  auth_token  = module.cluster.argocd_auth_token\n  insecure    = true\n  grpc_web    = true\n}\n\n# Deploy an Argo CD resource using the provider\nresource &quot;argocd_project&quot; &quot;demo_app&quot; {\n  metadata {\n    name      = &quot;demo-app&quot;\n    namespace = &quot;argocd&quot;\n  }\n\n  spec {\n    description  = &quot;Demo application project&quot;\n    source_repos = [&quot;*&quot;]\n\n    destination {\n      server    = &quot;https://kubernetes.default.svc&quot;\n      namespace = &quot;default&quot;\n    }\n\n    orphaned_resources {\n      warn = true\n    }\n  }\n\n  depends_on = [ module.cluster ]\n}</code>\n        </deckgo-highlight-code>\n<p>This requires to configure Argo CD dynamically, using the output of the Kubernetes cluster's resources.</p>\n<h1>Provider Initialization</h1>\n<p>Providers are initialized early in a Terraform run, as their 
initialization is required to compute the graph which defines in which order the resources are applied.</p>\n<p>This means it is not possible to initialize a provider only after some other resource has been created.</p>\n<p>Officially, the story stops here, and Terraform has <a href=\"https://github.com/hashicorp/terraform/issues/24055\">a bug report</a> to track the feature of dynamically configuring providers.</p>\n<p>So… it's game over then? 🎮 👾\nNot really!</p>\n<h1>Leveraging Pointers</h1>\n<p>When a provider is configured in Terraform, it triggers a configuration function:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">func Provider() *schema.Provider {\n    return &amp;schema.Provider{\n      ConfigureFunc: func(d *schema.ResourceData) (interface{}, error) {\n        // Create someObject\n        return someObject, nil\n      },\n    }\n}</code>\n        </deckgo-highlight-code>\n<p>This\n<code>ConfigureFunc</code>\nmethod is usually used to create a static client for the target API. 
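To make the pattern concrete, here is a minimal, self-contained Go sketch of such an eager configuration function; `ResourceData` and `Client` are illustrative stand-ins for the SDK's `*schema.ResourceData` and a real API client, not the actual types:

```go
package main

import "fmt"

// ResourceData is a stand-in for *schema.ResourceData: it exposes the
// provider block's configuration parameters.
type ResourceData struct{ params map[string]string }

func (d *ResourceData) Get(key string) string { return d.params[key] }

// Client is a stand-in for an API client built from those parameters.
type Client struct{ Addr string }

// configure mimics an eager ConfigureFunc: the client is created
// immediately, so every parameter must already be known at this point.
func configure(d *ResourceData) (*Client, error) {
	return &Client{Addr: d.Get("server_addr")}, nil
}

func main() {
	d := &ResourceData{params: map[string]string{"server_addr": "argocd.example.com:443"}}
	client, err := configure(d)
	if err != nil {
		panic(err)
	}
	fmt.Println(client.Addr)
}
```

Because the client is built as soon as the provider is configured, a parameter that only becomes known after another resource is applied cannot be used here, which is exactly the limitation worked around below.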
In the Argo CD provider for example, it returns a\n<code>ServerInterface</code>\nstructure, with pointers to several clients, instantiated from the provider parameters:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">type ServerInterface struct {                                                   \n    ApiClient            *apiclient.Client                                      \n    ApplicationClient    *application.ApplicationServiceClient                  \n    ClusterClient        *cluster.ClusterServiceClient                          \n    ProjectClient        *project.ProjectServiceClient                          \n    RepositoryClient     *repository.RepositoryServiceClient                    \n    RepoCredsClient      *repocreds.RepoCredsServiceClient                      \n    ServerVersion        *semver.Version                                        \n    ServerVersionMessage *version.VersionMessage                                                                                                              \n}</code>\n        </deckgo-highlight-code>\n<p>The return statement from the\n<code>ConfigureFunc</code>\neventually looks like this:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">return ServerInterface{                                                             \n    &amp;apiClient,                                                                     \n    &amp;applicationClient,                                                             \n    &amp;clusterClient,                                                                 \n    &amp;projectClient,                                                                 \n    &amp;repositoryClient,                                                              \n    &amp;repoCredsClient,                                                               \n    serverVersion,                                                                 
 \n    serverVersionMessage}, err</code>\n        </deckgo-highlight-code>\n<p>Let's add a new field to the\n<code>ServerInterface</code>\nto store the pointer to the provider's\n<code>ResourceData</code>\nobject, which gives access to the provider's parameters:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">type ServerInterface struct {                                                       \n    ApiClient            *apiclient.Client                                          \n    ApplicationClient    *application.ApplicationServiceClient                      \n    ClusterClient        *cluster.ClusterServiceClient                              \n    ProjectClient        *project.ProjectServiceClient                              \n    RepositoryClient     *repository.RepositoryServiceClient                        \n    RepoCredsClient      *repocreds.RepoCredsServiceClient                          \n    ServerVersion        *semver.Version                                            \n    ServerVersionMessage *version.VersionMessage                                    \n    ProviderData         *schema.ResourceData                                       \n}</code>\n        </deckgo-highlight-code>\n<p>Now in the\n<code>ConfigureFunc</code>\n, we'll instantiate the\n<code>ServerInterface</code>\n, providing only the\n<code>ProviderData</code>\npointer. The first resource that needs to use the provider will then instantiate the clients, when the provider parameters are available. 
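The deferred variant can be sketched the same way with stand-in types (illustrative names, not the provider's actual code): the `ConfigureFunc` equivalent only records the `ResourceData` pointer, and the client is built and cached on first use:

```go
package main

import "fmt"

// ResourceData is a stand-in for *schema.ResourceData.
type ResourceData struct{ params map[string]string }

func (d *ResourceData) Get(key string) string { return d.params[key] }

// Client is a stand-in for a real API client.
type Client struct{ Addr string }

// ServerInterface mirrors the pattern described above: it keeps the
// ResourceData pointer, and the client stays nil until first needed.
type ServerInterface struct {
	ProviderData *ResourceData
	ApiClient    *Client
	inits        int // counts real initializations, to demonstrate caching
}

// initClients builds the client once; later calls reuse the cached one.
func (s *ServerInterface) initClients() error {
	if s.ApiClient == nil {
		s.inits++
		s.ApiClient = &Client{Addr: s.ProviderData.Get("server_addr")}
	}
	return nil
}

func main() {
	// ConfigureFunc equivalent: only the pointer is stored, no client yet.
	s := &ServerInterface{ProviderData: &ResourceData{params: map[string]string{}}}

	// The parameter becomes known later, once other resources have applied.
	s.ProviderData.params["server_addr"] = "argocd.example.com:443"

	// Every resource calls initClients(); the client is built only once.
	for i := 0; i < 3; i++ {
		if err := s.initClients(); err != nil {
			panic(err)
		}
	}
	fmt.Println(s.ApiClient.Addr, s.inits)
}
```

Three calls to `initClients()` share a single client: the initialization counter stays at 1.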
We'll need the\n<code>ConfigureFunc</code>\nmethod to return a pointer to a\n<code>ServerInterface</code>\n, so we can later cache the clients and avoid recreating them for every resource:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">ConfigureFunc: func(d *schema.ResourceData) (interface{}, error) {                  \n    server := ServerInterface{ProviderData: d}                                      \n    return &amp;server, nil                                                             \n},</code>\n        </deckgo-highlight-code>\n<h1>Initialize the Clients</h1>\n<p>Now we need to actually initialize the clients in each resource.</p>\n<p>Each resource method gets the interface returned by the\n<code>ConfigureFunc</code>\nfunction as an empty interface parameter, usually called\n<code>meta</code>\n:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">func resourceArgoCDProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {</code>\n        </deckgo-highlight-code>\n<p>These methods currently simply cast the\n<code>meta</code>\nparameter as a\n<code>ServerInterface</code>\nstructure and use the pre-initialized clients:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">server := meta.(ServerInterface)</code>\n        </deckgo-highlight-code>\n<p>We now need to cast\n<code>meta</code>\nas a pointer to a\n<code>ServerInterface</code>\nstructure instead (since we'll need to modify the clients from within the resources), and initialize the clients:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">server := meta.(*ServerInterface)                                                   \nif err := server.initClients(); err != nil {                                        \n    return []diag.Diagnostic{                                                       \n        diag.Diagnostic{           
\n            Severity: diag.Error,\n            Summary:  &quot;Failed to init clients&quot;,\n            Detail:   err.Error(),\n        },\n    }\n}</code>\n        </deckgo-highlight-code>\n<p>The\n<code>initClients()</code>\nmethod of the\n<code>ServerInterface</code>\nstructure will be called, setting up the clients from the current provider parameters.</p>\n<h1>Client Pool Caching</h1>\n<p>In the\n<code>ServerInterface#initClients()</code>\nmethod, we want to make sure we reuse existing clients. This is rather simple, since each client is stored as a pointer in the structure, so it defaults to\n<code>nil</code>\n:</p>\n<deckgo-highlight-code go   highlight-lines=\"\">\n          <code slot=\"code\">func (p *ServerInterface) initClients() error {\n    d := p.ProviderData\n\n    if p.ApiClient == nil {\n        apiClient, err := initApiClient(d)\n        if err != nil {\n            return err\n        }\n        p.ApiClient = &amp;apiClient\n    }\n\n    // etc for all clients\n\n    return nil\n}</code>\n        </deckgo-highlight-code>\n<h1>Conclusion</h1>\n<p>That's it, we're done. 
With these modifications,\n<code>terraform plan</code>\nnow works. The resources get applied in the proper order, and the outputs from the\n<code>cluster</code>\nmodule get properly passed as configuration to the Argo CD clients.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/how-to-allow-dynamic-terraform-provider-configuration-20ik\">This post is also available on DEV.</a></em></p>    "},{"url":"/posts/immutability-loose-coupling-a-match-made-in-heaven-37kl/","relativePath":"posts/immutability-loose-coupling-a-match-made-in-heaven-37kl.md","relativeDir":"posts","base":"immutability-loose-coupling-a-match-made-in-heaven-37kl.md","name":"immutability-loose-coupling-a-match-made-in-heaven-37kl","frontmatter":{"title":"Immutability & loose coupling: a match made in heaven","stackbit_url_path":"posts/immutability-loose-coupling-a-match-made-in-heaven-37kl","date":"2021-03-18T07:31:29.616Z","excerpt":"Decoupling in container orchestration enables immutable infrastructure workflows.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--otN7Kjk1--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zptftzr4jxlblx11pf8n.jpg","comments_count":0,"positive_reactions_count":1,"tags":["devops","containers","immutability","architecture"],"canonical_url":"https://www.camptocamp.com/en/news-events/immutability-and-loose-coupling-a-match-made-in-heaven","template":"post"},"html":"<p>When it comes to infrastructure and deployment automation, two opposite approaches share the podium: <a 
href=\"https://www.digitalocean.com/community/tutorials/what-is-immutable-infrastructure\">mutable vs immutable management</a>.</p>\n<h2>Mutable Systems</h2>\n<p>Mutable systems usually have a long life cycle, typically in the order\nof weeks to years. As their requirements change (new files,\nconfigurations, users, packages, etc.), the systems are modified to\nmatch a new target state. When left unmanaged, mutable systems tend to\ndrift away from their target state, in a <em>divergent</em> dynamic.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11trbxkb1ev9goye0q2k.png\" alt=\"Convergence Models in Mutable Systems\"></p>\n<p>Automating mutable systems is often referred to as Configuration Management, and leverages tools such as <a href=\"https://cfengine.com/\">Cfengine</a>, <a href=\"https://puppet.com/\">Puppet</a>, <a href=\"https://www.chef.io/\">Chef</a>, or <a href=\"https://www.ansible.com/\">Ansible</a>. This tooling builds on the concepts of target state and idempotence, and relates to <a href=\"https://en.wikipedia.org/wiki/Promise_theory\">Mark Burgess’ Promise Theory</a>. Configuration Management aims to make the system <em>convergent</em>, by running a tool on a regular basis, in order to resynchronize the system with its target state. Some of these tools (e.g. <a href=\"https://github.com/purpleidea/mgmt\">mgmt</a>) also attempt to reach <em>congruence</em> by adopting a reactive approach, triggering corrective actions on events.</p>\n<h2>Immutable Systems</h2>\n<p>In an immutable system, any change requires a new deployment. Whether it be a change in configuration, new files, or new users, immutability demands that the system be destroyed and rebuilt from scratch.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl5ewtyqpo5offh4p9jh.png\" alt=\"Trash full? Let&#x27;s move to a new house!\"></p>\n<p>An immutable approach can greatly simplify deployments. 
Avoiding convergence, immutable systems rely on artifacts that are built once, and can be deployed multiple times. These artifacts can increase trust in the ability to rebuild the system from scratch if necessary. This approach can also ease scalability, since the artifact can be precisely duplicated.</p>\n<p>However, immutability comes with a high cost: in order to be done properly, it must be strict. Any change to the state involves a complete replacement of the artifact. Failing to abide by that rule results in a convergent system (at best), which cannot be managed in an immutable manner.</p>\n<h2>Mutable vs Immutable</h2>\n<p>If immutable systems are easier to maintain, what are the reasons for not using them? The system’s complexity is probably the main justification. A traditional mutable system features multiple layers (Operating System, Middleware, Applications) that are usually strongly coupled. For example, the flavor and version of the Operating System define which version of a Middleware (e.g. Tomcat, Apache) is available for installation. In turn, the Middleware version defines which libraries are available for the Application. On most Unix-based systems, shared libraries are at the root of strong links between software versions, based on the underlying ABIs required to run them.</p>\n<p>If such strongly coupled systems are to be managed in an immutable manner, then the <em>whole system</em> is the immutable artifact. In the majority of organizations, this implies managing tens to thousands of artifacts, and rebuilding them from scratch on a regular basis. Such complexity is too much of a cost.</p>\n<p>Enter decoupling technologies: over the years, new technologies have surfaced which decouple system components and ease their management in an immutable manner.</p>\n<h1>Virtualization and IaaS</h1>\n<p>With the rise of virtualization in the early years of the 21st century, it became easier to decouple the hardware from the Operating System. 
You could now size virtual machines as precisely as desired, in terms of CPU, memory, or disk, without adding or replacing any physical device.</p>\n<p>This unlocked access to a first level of automated immutability, using Virtual Machine images as the immutable artifact. Image generators (such as Hashicorp Packer) appeared, easing the generation of VM images.</p>\n<p>Provided the whole target state —including the OS, middleware and application itself— is built into the VM image, an immutable workflow can be used to manage it. In this case, whenever a change is required, the whole image needs to be rebuilt and redeployed to new VMs.</p>\n<figure>\n  <img alt=\"Golden Images are a common approach to divergent templating\" src=\"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gu5n8xhs57jvxnkn3ay.jpg\" />\n  <figcaption>Golden Images are a common approach to divergent templating</figcaption>\n</figure>\n<p>For a time, this deployment could not be easily automated, as physical nodes still needed to be manually picked before deploying the VM images.  Infrastructure as a Service (IaaS), often referred to as “Cloud”, changed that.</p>\n<p>IaaS provided APIs on top of virtualization, and the IaaS system (e.g.  AWS EC2, OpenStack) would often pick the physical hypervisor node itself, making it possible to fully automate the deployment of new artifacts (such as VM images).</p>\n<p>One important problem remained to achieve immutable infrastructure in most situations: the complexity of the artifact itself.</p>\n<h1>Containers and Orchestrators</h1>\n<p>When setting up applications on existing systems, several issues often arise, among which are: packaging, configuration, and dependencies.</p>\n<p>For years, developers and systems engineers tried to solve the problem of application packaging and deployment using all kinds of package managers, from deb/rpm to homemade systems. 
This often failed, because these packages didn’t allow running multiple instances of the application, were not easy to configure, and were too tightly coupled to the rest of the system.</p>\n<p>Docker containers provided a unified way of packaging applications, in the form of OCI images, and a rather unified way of configuring them (using environment variables or mounted files, in the <a href=\"https://12factor.net/\">12 factor app</a> fashion).</p>\n<p>But mainly, containers provided an abstraction level, a decoupling from the system. With containers, it doesn’t matter anymore which OS is running the container engine. Developers can now choose to run any version of Tomcat or Apache, on any node with a container engine. As a corollary, they can also run any combination of Middleware and Applications, regardless of the libraries provided on the underlying system.</p>\n<p>Furthermore, containers were made to be managed in an immutable manner, using OCI images as immutable artifacts. Every time a container needs to be modified, it requires the creation of a new container from a new image.</p>\n<p>The benefits are huge. 
With this decoupling of the Operating System from the Middleware and Applications, the monolithic immutable artifact that was previously managed as a VM image can now be broken down into many pieces: the application is now an immutable artifact, and so are the middleware components as well.</p>\n<p>Even better: since all the components running on the machines are now immutable, the machines themselves have now become totally neutral; all they require is a container engine in order to run containers.</p>\n<p>One thing can still make a node a snowflake: manually deployed containers, using tools such as Docker or Docker-compose.</p>\n<p>What IaaS did for VMs, Container orchestrators now do for containers: they provide an API to orchestrate the dynamic deployment of containers on a cluster of nodes.</p>\n<figure>\n  <img alt=\"Container Orchestration is to Containers what IaaS is to Virtual Machines\" src=\"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c4m11g0j9tcebfi9zlic.png\" />\n  <figcaption>Container Orchestration is to Containers what IaaS is to Virtual Machines.</figcaption>\n</figure>\n<p>The nodes are thus totally neutral now. As a result, their templates are greatly simplified, and they can now easily be managed in an immutable way. This new paradigm opened the way to new Operating Systems specialized for container orchestration, such as <a href=\"https://getfedora.org/en/coreos?stream=stable\">CoreOS</a> or <a href=\"https://rancher.com/docs/os/v1.x/en/\">RancherOS</a>, whose life cycles are meant to be managed with an immutable workflow.</p>\n<h1>Immutability &#x26; Convergence</h1>\n<p>Now that we have a full Immutable System, does it solve the problem of convergence?</p>\n<p>Immutable artifacts in themselves are not supposed to evolve, but their target state does evolve. 
In this regard, the situation with containers is similar to that of Golden Images for Virtual Machines: though the artifact is immutable, it can easily be used as a template leading to an unmaintained, divergent system.</p>\n<p>There is thus still a need for convergence tools in the container world.  However, this is not because the objects themselves drift from their target state. Rather, the target state evolves while the objects —supposedly— are stuck in their original state.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kyd2lcx01jxcgq1qiq6w.png\" alt=\"Convergence Models in Immutable Systems\"></p>\n<p>Where Configuration Management tools ensure a convergence of states for mutable systems such as VMs, packaging tools such as <a href=\"https://helm.sh/\">Helm</a> and <a href=\"https://github.com/roboll/helmfile\">Helmfile</a> can be used to periodically re-synchronize containers and other immutable objects with their target state.</p>\n<p>Finally, congruence is also achievable, with tools such as <a href=\"https://argoproj.github.io/argo-cd/\">Argo CD</a>. Argo CD not only deploys immutable objects to a Kubernetes cluster, but also keeps them synchronized with their last known target state, ensuring a continuous management.</p>\n<h1>Conclusion</h1>\n<p>Containers and Container Orchestrators enable fully immutable workflows for infrastructure, middleware and applications alike:</p>\n<ul>\n<li>\n<p>Nodes can be managed as virtual machines, with immutable VM images deployed dynamically using IaaS;</p>\n</li>\n<li>\n<p>Middleware and applications can be managed as containers, with immutable OCI images deployed dynamically using Container Orchestrators.</p>\n</li>\n</ul>\n<p>Are immutable systems always the best answer? As we’ve seen, the cost in artifact management and orchestration is far from negligible. 
Due to the transient nature that comes with their immutability, containers are better suited for stateless applications that easily scale on clusters of neutral nodes. For this reason, highly stateful applications with long life cycles, such as databases, are still better maintained as mutable systems most of the time.</p>\n<p>Choosing an immutable vs mutable architecture depends a lot on an organization’s software architecture and culture, and is not a light choice to make. Is immutable infrastructure the solution to your automation problems? Contact us, we can help you find out!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/immutability-loose-coupling-a-match-made-in-heaven-37kl\">This post is also available on DEV.</a></em></p>    "},{"url":"/posts/how-to-manage-files-with-puppet-55e4/","relativePath":"posts/how-to-manage-files-with-puppet-55e4.md","relativeDir":"posts","base":"how-to-manage-files-with-puppet-55e4.md","name":"how-to-manage-files-with-puppet-55e4","frontmatter":{"title":"All the ways to manage files with Puppet","stackbit_url_path":"posts/how-to-manage-files-with-puppet-55e4","date":"2020-06-08T21:12:50.160Z","excerpt":"Puppet has many tools to manage configuration files. 
Knowing them can help you choose the one that best fits your needs.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":0,"positive_reactions_count":7,"tags":["puppet","cfgmgmt","tutorial","devops"],"canonical_url":"https://dev.to/camptocamp-ops/how-to-manage-files-with-puppet-55e4","template":"post"},"html":"<p>\"<a href=\"https://en.wikipedia.org/wiki/Everything_is_a_file\">Everything is a file</a>\" is a very famous Unix principle. And because of this, most configuration management on Unix/Linux revolves around managing files.</p>\n<h1>Know your Tools <a name=\"tools\"></a></h1>\n<p>Puppet, as a configuration management tool, is no exception to this. As a consequence, there are many ways to manage configuration files with Puppet. They all have a reason to exist, and a purpose to fulfill.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/rd2gpnxaq87rvgl5vz0j.jpg\" alt=\"Know your tools\"></p>\n<p>Knowing your tools is the subject of this blog post, with the following topics:</p>\n<ul>\n<li><a href=\"#tools\">Know your Tools</a></li>\n<li><a href=\"#approaches\">File Management Approaches</a></li>\n<li>\n<p><a href=\"#whole-cfg\">Managing Whole Configurations</a></p>\n<ul>\n<li><a href=\"#whole-static\">Static Content</a></li>\n<li><a href=\"#whole-static-scripts\">Deploying Scripts and Binary Data</a></li>\n<li><a href=\"#whole-static-source\">Beware of Source</a></li>\n<li><a href=\"#whole-dynamic\">Dynamic Content</a></li>\n<li><a href=\"#whole-dynamic-purge\">Purgeable Types</a></li>\n<li><a href=\"#whole-dynamic-onescope\">Content from One Scope</a></li>\n<li>\n<p><a href=\"#whole-dynamic-multiscope\">Content from Multiple Scopes</a></p>\n<ul>\n<li><a 
href=\"#whole-dynamic-multiscope-includes\">Includes</a></li>\n<li><a href=\"#whole-dynamic-multiscope-concat\">Concat</a></li>\n</ul>\n</li>\n<li><a href=\"#whole-dynamic-example\">Content from an Example File</a></li>\n</ul>\n</li>\n<li>\n<p><a href=\"#partial-cfg\">Managing Partial Configurations</a></p>\n<ul>\n<li><a href=\"#partial-native\">Native Types</a></li>\n<li><a href=\"#partial-includes\">Includes</a></li>\n<li><a href=\"#partial-augeas\">The Augeas Type</a></li>\n<li><a href=\"#partial-fileline\">File line</a></li>\n</ul>\n</li>\n<li><a href=\"#conclusion\">Conclusion</a></li>\n</ul>\n<h1>File Management Approaches <a name=\"approaches\"></a></h1>\n<p>Let's start with a first choice when managing files:</p>\n<ul>\n<li>managing the whole configuration</li>\n<li>managing partial configurations</li>\n</ul>\n<p>Many practitioners I've met consider that managing whole configurations is the only acceptable way of proceeding, since managing partial configuration does not lead to a predictable final state. </p>\n<p>However, managing whole configurations often leads to managing defaults which were carefully chosen for your GNU/Linux distribution and were perfectly fine to keep. It also leads to maintaining lots of different defaults in modules to try and stay as close as possible to the distribution standards when using default values.</p>\n<p>So both approaches have pros and cons.</p>\n<h1>Managing Whole Configurations <a name=\"whole-cfg\"></a></h1>\n<p>This is the approach you take when you want to control the full content of the software configuration. 
This does not mean however that everything has to fit into a single file; the configuration might be split, and splitting often makes configuration management more flexible.</p>\n<h2>Static Content <a name=\"whole-static\"></a></h2>\n<p>The easiest case is without doubt managing static content, when your file is always the same.</p>\n<p>However simple this might seem, there can still be tricks.</p>\n<h3>Deploying Scripts and Binary Data <a name=\"whole-static-scripts\"></a></h3>\n<p>For example, scripts and binary blobs can easily be managed this way in Puppet, and we often see code such as:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">file { &#39;/usr/local/bin/myscript.sh&#39;:\n  ensure =&gt; file,\n  source =&gt; &quot;puppet:///modules/${module_name}/myscript.sh&quot;,\n}</code>\n        </deckgo-highlight-code>\n<p>That works fine, but if you're trying to deploy a piece of software (maybe even with a tarball, using the <a href=\"https://forge.puppet.com/puppet/archive\">\n<code>puppet-archive</code>\n</a> module), it's probably better to package it for your distribution and use your package manager (apt/yum/etc.) as the deployment layer. You'll get much simpler Puppet code, usually better performance, and you'll rely on the package manager's metadata for idempotence:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">package { &#39;myscript&#39;:\n  ensure =&gt; present,\n}</code>\n        </deckgo-highlight-code>\n<p>Another possibility is using the <a href=\"https://forge.puppet.com/puppetlabs/vcsrepo\">\n<code>puppetlabs-vcsrepo</code>\n</a> resource type. The VCS (e.g. 
Git) will then provide the metadata to ensure idempotence, e.g.:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">vcsrepo {&#39;/usr/src/ceph-rbd-backup&#39;:\n  ensure   =&gt; latest,\n  provider =&gt; &#39;git&#39;,\n  source   =&gt; &#39;https://github.com/camptocamp/ceph-rbd-backup&#39;,\n  revision =&gt; &#39;master&#39;,\n}</code>\n        </deckgo-highlight-code>\n<h3>Beware of Source <a name=\"whole-static-source\"></a></h3>\n<p>When using the\n<code>file</code>\ntype to deploy static content, it is quite common to use the\n<code>source</code>\nattribute to specify the file to copy:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">file { &#39;/srv/foo&#39;:\n  ensure =&gt; file,\n  source =&gt; &quot;puppet:///modules/${module_name}/foo&quot;,\n}</code>\n        </deckgo-highlight-code>\n<p>This makes use of the Puppetserver's <a href=\"https://puppet.com/docs/puppet/latest/config_file_fileserver.html\">fileserver feature</a>. 
When using this syntax, every Puppet run will result in a\n<code>file_metadata</code>\nHTTP request to the Puppetserver for each file managed, just to get the metadata necessary to decide whether the file needs to be replaced or not.</p>\n<p>When many files are managed this way on many agents, this results in lots of HTTP requests being made during catalog application, which will saturate the Puppetserver's JRuby threads and prevent them from processing catalog compilations and reports.</p>\n<p>Instead, when deploying non-binary content, you can use the\n<code>file()</code>\nfunction with a relative path:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">file { &#39;/srv/foo&#39;:\n  ensure  =&gt; file,\n  content =&gt; file(&quot;${module_name}/foo&quot;),\n}</code>\n        </deckgo-highlight-code>\n<p>This is totally equivalent to the previous syntax, except the whole file will be included in the catalog, instead of just a pointer to the fileserver.</p>\n<h2>Dynamic Content <a name=\"whole-dynamic\"></a></h2>\n<p>Very often, static content is not enough to configure your software. You need variables, and a more flexible approach.</p>\n<h3>Purgeable Types <a name=\"whole-dynamic-purge\"></a></h3>\n<p>Native Puppet Ruby types are probably the most flexible way of managing a configuration, as they provide a very fine-grained interface to edit configuration files.</p>\n<p>However, they do not manage the whole configuration by default. 
That is, unless you can use purging with them.</p>\n<p>Purging resources in Puppet requires two conditions:</p>\n<ul>\n<li>a type which supports listing instances (at least one provider has a\n<code>self.instances</code>\nmethod defined)</li>\n<li>a parameter that can ensure the resource's absence</li>\n</ul>\n<p>When both these conditions are met, Puppet can purge the resources it doesn't explicitly manage by:</p>\n<ul>\n<li>listing all known resources (using the\n<code>self.instances</code>\nmethod)</li>\n<li>setting all of them to be absent by default</li>\n<li>overriding the presence with the catalog's explicit resource parameters</li>\n</ul>\n<p>There are two main ways of achieving this:</p>\n<ul>\n<li>using the standard\n<code>resources</code>\ntype</li>\n<li>using the <a href=\"https://forge.puppet.com/crayfishx/purge\">\n<code>crayfishx-purge</code>\n</a> module</li>\n</ul>\n<p>The\n<code>resources</code>\ntype fits basic needs, by letting you purge all resources not managed by Puppet. 
For example:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">host { &#39;localhost&#39;:\n  ensure =&gt; present,\n  ip     =&gt; &#39;127.0.0.1&#39;,\n}\n\nresources { &#39;host&#39;:\n  purge =&gt; true,\n}</code>\n        </deckgo-highlight-code>\n<p>will purge all entries in\n<code>/etc/hosts</code>\nexcept for localhost.</p>\n<p>The\n<code>resources</code>\nresource type also allows you to set exceptions, though only for the\n<code>user</code>\ntype:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">resources { &#39;user&#39;:\n  purge              =&gt; true,\n  unless_system_user =&gt; true,\n}</code>\n        </deckgo-highlight-code>\n<p>This is a hard limitation, which the <a href=\"https://forge.puppet.com/crayfishx/purge\">\n<code>purge</code>\ntype</a> fixes by providing a more flexible interface, allowing you to set:</p>\n<ul>\n<li>fine conditions for purging resources</li>\n<li>which parameter and value to use for purging.</li>\n</ul>\n<p>For example:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">purge { &#39;mount&#39;:\n  state  =&gt; unmounted,\n  unless =&gt; [&#39;name&#39;, &#39;==&#39;, [&#39;/&#39;, &#39;/var&#39;, &#39;/home&#39;]],\n}</code>\n        </deckgo-highlight-code>\n<p>will unmount all file systems that are not managed by Puppet, unless they are mounted on\n<code>/</code>\n,\n<code>/var</code>\nor\n<code>/home</code>\n.</p>\n<p>In order to manage configurations in full, the\n<code>purge</code>\ntype can be used with native types that manage configuration file stanzas and know how to list instances.</p>\n<p>This is the case of:</p>\n<ul>\n<li>the <a href=\"https://puppet.com/docs/puppet/5.5/types/host.html\">\n<code>host</code>\ntype</a></li>\n<li>the <a href=\"https://puppet.com/docs/puppet/5.5/types/mailalias.html\">\n<code>mailalias</code>\ntype</a></li>\n<li>most <a 
href=\"http://augeasproviders.com/\">Augeasproviders types</a></li>\n</ul>\n<p>For example, you can manage\n<code>sshd_config</code>\nin full using the <a href=\"https://forge.puppet.com/herculesteam/augeasproviders_ssh\">\n<code>herculesteam-augeasproviders_ssh</code>\n</a> module with code such as:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">sshd_config {\n  &#39;X11Forwarding&#39;:\n    value =&gt; &#39;yes&#39;,\n    ;\n\n  &#39;UsePAM&#39;:\n    value =&gt; &#39;no&#39;,\n    ;\n}\n\npurge { &#39;sshd_config&#39;: }</code>\n        </deckgo-highlight-code>\n<h3>Content from One Scope <a name=\"whole-dynamic-onescope\"></a></h3>\n<p>When there are no purgeable types for your configuration file type, and you need to manage the content from a single scope (a single Puppet class), the most obvious option is to use a simple\n<code>file</code>\nresource with a template. Prefer <a href=\"https://puppet.com/docs/puppet/5.5/lang_template_epp.html\">EPP templates</a> these days, as they are easier and safer than ERB templates:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">file { &#39;/path/to/foo&#39;:\n  ensure  =&gt; file,\n  content =&gt; epp(\n    &quot;${module_name}/foo.epp&quot;,\n    {\n      var1 =&gt; &#39;value1&#39;,\n      var2 =&gt; &#39;value2&#39;,\n    }\n  ),\n}</code>\n        </deckgo-highlight-code>\n<h3>Content from Multiple Scopes <a name=\"whole-dynamic-multiscope\"></a></h3>\n<p>When your content needs to come from multiple scopes, a single\n<code>file</code>\nresource won't suffice.</p>\n<h4>Includes <a name=\"whole-dynamic-multiscope-includes\"></a></h4>\n<p>If you're lucky and your configuration format supports include statements, this is the easiest way to go. 
For example:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\"># Deploy a static file to perform the inclusion\nfile { &#39;/etc/sudoers&#39;:\n  ensure  =&gt; file,\n  content =&gt; &#39;#includedir /etc/sudoers.d&#39;,\n}\n\n# Deploy each rule as a separate file in the directory\nfile { &#39;/etc/sudoers.d/defaults_env&#39;:\n  ensure  =&gt; file,\n  content =&gt; &#39;Defaults env_reset&#39;,\n}\n\nfile { &#39;/etc/sudoers.d/foo&#39;:\n  ensure  =&gt; file,\n  content =&gt; &#39;foo ALL=(ALL:ALL) ALL&#39;,\n}\n\n# Let Puppet purge the directory of all unknown files\n# (purge only applies to recursively managed directories)\nfile { &#39;/etc/sudoers.d&#39;:\n  ensure  =&gt; directory,\n  recurse =&gt; true,\n  purge   =&gt; true,\n}</code>\n        </deckgo-highlight-code>\n<h4>Concat <a name=\"whole-dynamic-multiscope-concat\"></a></h4>\n<p>Many configuration formats don't support includes: everything has to be in a single file. Managing such a file from multiple scopes requires the use of a concat module.</p>\n<p>The most widely used concat module is the official <a href=\"https://forge.puppet.com/puppetlabs/concat\">\n<code>puppetlabs-concat</code>\n</a>. It lets you declare a target file where all fragments will be concatenated, and then deploy multiple fragments tagged for this target. 
For example, the sudoers example above is roughly equivalent to:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">concat { &#39;/etc/sudoers&#39;:\n  ensure =&gt; present,\n}\n\nconcat::fragment { &#39;defaults_env&#39;:\n  target  =&gt; &#39;/etc/sudoers&#39;,\n  content =&gt; &#39;Defaults env_reset&#39;,\n  order   =&gt; &#39;01&#39;,\n}\n\nconcat::fragment { &#39;foo&#39;:\n  target  =&gt; &#39;/etc/sudoers&#39;,\n  content =&gt; &#39;foo ALL=(ALL:ALL) ALL&#39;,\n  order   =&gt; &#39;10&#39;,\n}</code>\n        </deckgo-highlight-code>\n<p>Each fragment is deployed separately to the agent, then concatenated to generate the final file.</p>\n<h3>Content from an Example File <a name=\"whole-dynamic-example\"></a></h3>\n<p>A few years ago, I experimented with yet another option to manage fully dynamic content, without losing the benefit of sane distribution defaults.</p>\n<p>The <a href=\"https://forge.puppet.com/camptocamp/augeas_file\">\n<code>camptocamp-augeas_file</code>\n</a> resource type allows you to use a local file on the Puppet agent as a template on which Augeas changes are applied to generate the final file:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">augeas_file { &#39;/etc/apt/sources.list.d/jessie.list&#39;:\n  lens    =&gt; &#39;Aptsources.lns&#39;,\n  base    =&gt; &#39;/usr/share/doc/apt/examples/sources.list&#39;,\n  changes =&gt; [&#39;setm ./*[distribution] distribution jessie&#39;],\n}</code>\n        </deckgo-highlight-code>\n<p>Every time the Puppet agent runs, it will use\n<code>/usr/share/doc/apt/examples/sources.list</code>\nas a template and apply the\n<code>changes</code>\nusing Augeas to generate\n<code>/etc/apt/sources.list.d/jessie.list</code>\n. The target file is only written if any changes occur, making it idempotent. If the template changes (e.g. 
after a package upgrade), the target will be regenerated.</p>\n<h1>Managing Partial Configurations <a name=\"partial-cfg\"></a></h1>\n<p>Partial configurations offer fewer management options. They're essentially lighter versions of the options described above:</p>\n<h2>Native types <a name=\"partial-native\"></a></h2>\n<p>Just as <a href=\"#whole-dynamic-purge\">for full configurations</a>, you can use native Puppet types (\n<code>host</code>\n,\n<code>mailalias</code>\n, Augeasproviders types, etc.), without purging them.</p>\n<p>In addition, since you don't mind managing files partially, you can also use types which don't support purging, such as <a href=\"https://forge.puppet.com/puppetlabs/inifile\">\n<code>ini_setting</code>\n</a> for INI file types:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">ini_setting { &quot;sample setting&quot;:\n  ensure  =&gt; present,\n  path    =&gt; &#39;/tmp/foo.ini&#39;,\n  section =&gt; &#39;bar&#39;,\n  setting =&gt; &#39;baz&#39;,\n  value   =&gt; &#39;quux&#39;,\n}</code>\n        </deckgo-highlight-code>\n<p>or the <a href=\"https://forge.puppet.com/herculesteam/augeasproviders_shellvar\">\n<code>shellvar</code>\ntype</a> for shell configuration files:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">shellvar { &quot;ntpd options&quot;:\n  ensure   =&gt; present,\n  target   =&gt; &quot;/etc/sysconfig/ntpd&quot;,\n  variable =&gt; &quot;OPTIONS&quot;,\n  value    =&gt; &quot;-g -x -c /etc/myntp.conf&quot;,\n}</code>\n        </deckgo-highlight-code>\n<h2>Includes <a name=\"partial-includes\"></a></h2>\n<p>Includes work just as for <a href=\"#whole-dynamic-multiscope-includes\">whole configurations</a>, but without purging the directory.</p>\n<h2>The Augeas Type <a name=\"partial-augeas\"></a></h2>\n<p>If no Augeasproviders type exists for your resource type, but Augeas has an <a href=\"https://augeas.net/stock_lenses.html\">available lens</a> 
for your configuration format, then you can most likely use the <a href=\"https://puppet.com/docs/puppet/5.5/types/augeas.html\">\n<code>augeas</code>\nresource type</a> to manipulate it.</p>\n<p>This is often used to manipulate XML configurations, for example:</p>\n<deckgo-highlight-code puppet   highlight-lines=\"\">\n          <code slot=\"code\">augeas {&#39;foo.xml&#39;:\n  incl    =&gt; &#39;/tmp/foo.xml&#39;,\n  context =&gt; &#39;/files/tmp/foo.xml/foo&#39;,\n  lens    =&gt; &#39;Xml.lns&#39;,\n  changes =&gt; [\n    &#39;set bar/# text herp&#39;,\n  ],\n}</code>\n        </deckgo-highlight-code>\n<h2>File_line <a name=\"partial-fileline\"></a></h2>\n<p>I've kept\n<code>file_line</code>\nfor the end of this list, because this is really the last option you might want to consider (just like\n<code>exec</code>\n) since it has many pitfalls.</p>\n<p>However, if you got this far, you're probably either:</p>\n<ul>\n<li>trying to patch packaged software, which is a very nasty thing to do; it's much better to repackage it properly (and send the patch to the maintainer 😁, that's how open source works!)</li>\n<li>editing a weird configuration file such as\n<code>.bashrc</code>\n(which the\n<code>shellvar</code>\ntype usually parses rather well) or some kind of PHP or Perl configuration… I don't envy you if you have no option of using templates/concat for that!</li>\n</ul>\n<h1>Conclusion <a name=\"conclusion\"></a></h1>\n<p>There are many tools to manage files in Puppet.</p>\n<p>Do you have other modules/resource types you like to use for this? 
Let me know in the comments!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/how-to-manage-files-with-puppet-55e4\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/simple-secret-sharing-with-gopass-and-summon-40jk/","relativePath":"posts/simple-secret-sharing-with-gopass-and-summon-40jk.md","relativeDir":"posts","base":"simple-secret-sharing-with-gopass-and-summon-40jk.md","name":"simple-secret-sharing-with-gopass-and-summon-40jk","frontmatter":{"title":"Simple secret sharing with gopass and summon","stackbit_url_path":"posts/simple-secret-sharing-with-gopass-and-summon-40jk","date":"2020-07-28T16:34:57.621Z","excerpt":"Storing and sharing secrets doesn't have to be complex","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--t2vDCZ31--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/gl8147l5s2ky6po9tdfi.png","comments_count":0,"positive_reactions_count":10,"tags":["security","devops","showdev","opensource"],"canonical_url":"https://dev.to/camptocamp-ops/simple-secret-sharing-with-gopass-and-summon-40jk","template":"post"},"html":"<p>Secrets are a fundamental, yet complex issue in software deployment.</p>\n<p>Solutions such as <a href=\"https://www.keepassx.org/\">KeepassX</a> are simple to use, but quite impractical when it comes to automation.</p>\n<p>More complex options like <a href=\"https://www.vaultproject.io/\">Hashicorp Vault</a> are extremely powerful, but harder to set up and maintain.</p>\n<h1>Pass: a simple solution</h1>\n<p>When it comes to storing securely and 
sharing passwords in a team, it is hard to come up with a simpler, more efficient solution than Git and GnuPG combined.</p>\n<p><a href=\"https://www.passwordstore.org/\">Pass</a> is a shell script that does just that. Inside a Git repository, Pass stores passwords in individual files encrypted for all the team's GnuPG keys. It features a CLI to manipulate passwords, add new entries, or search through existing passwords.</p>\n<h1>More features</h1>\n<p>However, Pass is quite limited in its features, so another project was born a few years later to provide a new Go implementation of the Pass standard. Its name: simply <a href=\"https://github.com/gopasspw/gopass\">Gopass</a>.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/1cs4zelpc542tzc2ligm.png\" alt=\"Gopass Logo\"></p>\n<h2>Installing</h2>\n<p>Gopass is provided as binaries you can download from the releases page <a href=\"https://github.com/gopasspw/gopass/releases\">on GitHub</a>.</p>\n<h2>Features</h2>\n<p>Here are some of the features that make Gopass a great tool.</p>\n<h3>Multiple mounts</h3>\n<p>While\n<code>pass</code>\nallows you to have a single Git repository with your passwords, Gopass lets you create multiple repositories called \"mounts\", which is very useful when you want to share different secrets with different people:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass mounts\ngopass (/home/raphink/.password-store)\n├── c2c (/home/raphink/.password-store-c2c)\n├── perso (/home/raphink/.password-store-perso)\n└── terraform (/home/raphink/.password-store-terraform)</code>\n        </deckgo-highlight-code>\n<p>Gopass uses a prefix to access secrets in mounts, so\n<code>terraform/puppet/c2c</code>\nactually refers to the secret stored in\n<code>/home/raphink/.password-store-terraform/puppet/c2c.gpg</code>\n.</p>\n<h3>Multiple users</h3>\n<p>Each Git repository can be set to encrypt passwords for multiple GnuPG 
keys.</p>\n<p>The\n<code>.gpg-id</code>\nfile at the root of each repository contains the list of public keys to use for encryption, and the\n<code>.public-keys/</code>\ndirectory keeps a copy of each key, making it easy for collaborators to import them into their keyring before they can start encrypting passwords for the team.</p>\n<h3>Fuzzy search</h3>\n<p>Gopass helps you find entries when the key you gave it doesn't match an exact known path:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass jenkins\n[ 0] c2c/c2c_aws/jenkins-c2c\n[ 1] c2c/c2c_mgmtsrv/freeipa/c2c-jenkins-swarm\n[ 2] c2c/c2c_mgmtsrv/freeipa/jenkins-test-users\n[ 3] perso/Devel/jenkins-ci.org\n[ 4] terraform/aws/jenkins-c2c\n\nFound secrets - Please select an entry [0]: </code>\n        </deckgo-highlight-code>\n<h3>Structured secrets</h3>\n<p>When decrypting a password, Gopass parses the content into two different parts: a password and a YAML document. For example, the content of a secret could look like this:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">foo\n---\nkey1: value1\nanother_key:\n  bar: baz</code>\n        </deckgo-highlight-code>\n<h4>Password</h4>\n<p>The first line of the content is the\n<code>password</code>\n. 
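The first-line/YAML split is easy to picture; here is a naive Python sketch of the content layout (an illustration only, not gopass's actual parser, which is written in Go):

```python
def split_secret(content):
    """Split a gopass-style secret: the first line is the password,
    anything after an optional '---' separator is the YAML document."""
    password, _, rest = content.partition("\n")
    if rest.startswith("---"):
        rest = rest[3:].lstrip("\n")
    return password, rest
```

Applied to the example secret above, this yields the password `foo` and the YAML document starting at `key1: value1`.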
If this is all you're interested in, you can use\n<code>gopass show --password</code>\nto retrieve it:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass show --password perso/test\nfoo</code>\n        </deckgo-highlight-code>\n<h4>Querying keys</h4>\n<p>When the second part of the content (lines 2 and following) is a valid YAML document, you can query these values by providing a key, for example:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass show perso/test key1\nvalue1</code>\n        </deckgo-highlight-code>\n<p>Starting with Gopass 1.9.3, you can also query subkeys using either a dot or slash notation:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass show perso/test another_key.bar\nbaz\n$ gopass show perso/test /another_key/bar\nbaz</code>\n        </deckgo-highlight-code>\n<p>This makes it possible to store several fields in the same secret and query each of them individually.</p>\n<h3>TOTP</h3>\n<p>Gopass lets you store TOTP keys alongside passwords. 
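Under the hood, a TOTP token is just an HMAC of the current 30-second time step, dynamically truncated to a few decimal digits (RFC 6238). A minimal sketch using only the Python standard library, to show what is computed from the stored key (gopass does this for you, in Go):

```python
import base64
import hmac
import struct
import time
from hashlib import sha1

def totp(secret_b32, now=None, period=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 of the current time step,
    dynamically truncated to a short decimal code."""
    # Base32-decode the shared key (re-pad to a multiple of 8 chars)
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    step = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", step), sha1).digest()
    # Dynamic truncation as defined in RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59s this secret yields "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))
```

The token changes every `period` seconds, which is why the `gopass totp` output below displays how long the current one remains valid.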
For example, you can have the following secret, stored at\n<code>terraform/service.io/api</code>\n:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">WPTmU`&gt;b&lt;Y31\n---\npassword: &#39;WPTmU`&gt;b&lt;Y31&#39;\ntotp: &#39;PIJ6AIHETAHSHOO7SHEI1AEK6IH1SOOCHATUOSH8XUAN0OOTH9XAHRUXO4AHJAEVI&#39;\nurl: https://myservice.io\nusername: jdoe</code>\n        </deckgo-highlight-code>\n<p>In addition to retrieving each field with the corresponding key, you can also generate TOTP tokens with\n<code>gopass totp</code>\n:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ gopass totp terraform/service.io/api\n568000 lasts 18s \t|------------==================|</code>\n        </deckgo-highlight-code>\n<h2>Integrations</h2>\n<p>Gopass can be easily integrated into projects for deployments or CI/CD tasks.</p>\n<h3>Summon</h3>\n<p>The easiest way to integrate Gopass is probably to use <a href=\"https://github.com/cyberark/summon\">Summon</a>.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/gy0cw2iqwdb85lobl5jl.png\" alt=\"Summon logo\"></p>\n<p>Summon is a tool which dynamically exposes environment variables with values retrieved from various secret stores.\n<code>gopass</code>\nis one of its possible providers.</p>\n<h4>Setup</h4>\n<p>Setting it up to use\n<code>gopass</code>\nis rather straightforward. 
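Summon's provider protocol is simple: for each variable, it executes a provider binary with the secret reference as its argument and reads the value from the provider's stdout. A rough Python sketch of that resolution loop (illustrative only; the `provider` parameter stands in for whatever provider binary is on your PATH):

```python
import subprocess

def resolve_secrets(secrets, provider="summon-gopass"):
    """Sketch of Summon's resolution loop: call the provider once per
    variable with the secret reference as argument, and capture stdout
    (minus the trailing newline) as the variable's value."""
    env = {}
    for var, ref in secrets.items():
        result = subprocess.run([provider, ref], capture_output=True,
                                text=True, check=True)
        env[var] = result.stdout.rstrip("\n")
    return env
```

Any executable that prints a secret for a given reference can therefore act as a provider, which is what makes the small wrapper script shown next sufficient.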
We use a simple wrapper called\n<code>summon-gopass</code>\n, which needs to be in your PATH:</p>\n<deckgo-highlight-code bash   highlight-lines=\"\">\n          <code slot=\"code\">#!/bin/sh\ngopass show $(echo &quot;${@}&quot;|tr : \\ )</code>\n        </deckgo-highlight-code>\n<p>You can also simply make\n<code>summon-gopass</code>\na symbolic link to your\n<code>gopass</code>\nbinary, but subkeys won't work in this case.</p>\n<h4>Usage</h4>\n<p>Summon lets you provide a local\n<code>secrets.yml</code>\nfile which defines which environment variables you wish to set, and where to find their values.</p>\n<p>Here's a simple example of a\n<code>secrets.yml</code>\nfile using the secret we defined earlier:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">SERVICE_URL: !var terraform/service.io/api url\nUSER: !var terraform/service.io/api username\nSERVICE_PASSWORD: !var terraform/service.io/api password</code>\n        </deckgo-highlight-code>\n<p>You can test this setup by running the following command in the directory containing\n<code>secrets.yml</code>\n:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ summon env</code>\n        </deckgo-highlight-code>\n<p>The output should contain the three variables with the values stored in Gopass.</p>\n<h4>Exposing files</h4>\n<p>While the format above allows you to expose simple secrets as variables, it is not very practical when you need secrets exposed as files.</p>\n<p>Summon covers this need, however, using the\n<code>file</code>\nflag. 
For example:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">SSH_KEY: !var:file terraform/service.io/ssh private_key</code>\n        </deckgo-highlight-code>\n<p>If\n<code>terraform/service.io/ssh</code>\nis a secret in Gopass whose\n<code>private_key</code>\nYAML field contains an SSH private key, then Summon will extract this secret, place it into a temporary file (in\n<code>/dev/shm</code>\nby default) and set the\n<code>SSH_KEY</code>\nvariable with the path to the file. After the command returns, the temporary file will be deleted.</p>\n<p>You could then use such a\n<code>secrets.yml</code>\nfile with:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">summon sh -c &#39;ssh -i $SSH_KEY user@service&#39;</code>\n        </deckgo-highlight-code>\n<p>Another useful example is to store a Kubernetes cluster configuration in Gopass, e.g.:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\napiVersion: v1\nclusters:\n    - cluster:\n        server: https://k8s.example.com\n      name: k8s\ncontexts:\n    - context:\n        cluster: k8s\n        namespace: default\n        user: default-cluster-admin\n      name: default-admin\ncurrent-context: default-admin\nkind: Config\npreferences: {}\nusers:\n    - name: default-cluster-admin\n      user:\n        token: averylongtoken</code>\n        </deckgo-highlight-code>\n<p>With the following\n<code>secrets.yml</code>\nfile:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">KUBECONFIG: !var:file path/to/secret</code>\n        </deckgo-highlight-code>\n<p>You can then work on the Kubernetes cluster with\n<code>kubectl</code>\nusing:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ summon kubectl &lt;some command&gt;</code>\n        </deckgo-highlight-code>\n<h3>Terraform integration</h3>\n<p>A simple way to pass variables to 
Terraform is to declare them and use\n<code>summon</code>\nto pass the values:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">TF_VAR_var1: !var terraform/project1/secret1 field1</code>\n        </deckgo-highlight-code>\n<p>You can then run\n<code>summon terraform</code>\nto dynamically pass these secrets to Terraform.</p>\n<p>Another possibility is to use <a href=\"https://github.com/camptocamp/terraform-provider-pass\">Camptocamp's Terraform Pass Provider</a> which lets you retrieve and set passwords in Gopass natively in Terraform:</p>\n<deckgo-highlight-code hcl   highlight-lines=\"\">\n          <code slot=\"code\">provider &quot;pass&quot; {\n  store_dir = &quot;/srv/password-store&quot;    # defaults to $PASSWORD_STORE_DIR\n  refresh_store = false                # do not call `git pull`\n}\n\n# Store a value into the Gopass store\nresource &quot;pass_password&quot; &quot;test&quot; {\n  path = &quot;secret/foo&quot;\n  password = &quot;0123456789&quot;\n  data = {\n    zip = &quot;zap&quot;\n  }\n}\n\n# Retrieve password at another_secret/bar to be used in Terraform code\ndata &quot;pass_password&quot; &quot;test&quot; {\n  path = &quot;another_secret/bar&quot;\n}</code>\n        </deckgo-highlight-code>\n<p>The provider exposes the secret with the following properties:</p>\n<ul>\n<li>\n<code>path</code>\n: path to the secret</li>\n<li>\n<code>password</code>\n: secret password (first line of the content)</li>\n<li>\n<code>data</code>\n: a structure (map) of the YAML data in the content</li>\n<li>\n<code>body</code>\n: the content found on lines 2 and following, if it could not be parsed as YAML</li>\n<li>\n<code>full</code>\n: the full content (all lines) of the secret</li>\n</ul>\n<h3>Hiera Integration</h3>\n<p>The most standard way to store secrets in Hiera is to use <a 
href=\"https://github.com/voxpupuli/hiera-eyaml\">\n<code>hiera-eyaml</code>\n</a>, which stores secret values encrypted inside YAML files, using either a PKCS7 key (default) or multiple GnuPG keys (when using <a href=\"https://github.com/voxpupuli/hiera-eyaml-gpg\">\n<code>hiera-eyaml-gpg</code>\n</a>).</p>\n<p>If your passwords are already stored in Gopass, you might want to integrate that into Hiera instead.</p>\n<p>The <a href=\"https://github.com/camptocamp/hiera-pass\">\n<code>camptocamp/hiera-pass</code>\nmodule</a> provides two Hiera backends to retrieve keys either as full Gopass secrets, or as keys inside the secrets.</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/simple-secret-sharing-with-gopass-and-summon-40jk\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/taming-puppetserver-6-a-grafana-story-3c4f/","relativePath":"posts/taming-puppetserver-6-a-grafana-story-3c4f.md","relativeDir":"posts","base":"taming-puppetserver-6-a-grafana-story-3c4f.md","name":"taming-puppetserver-6-a-grafana-story-3c4f","frontmatter":{"title":"Taming Puppetserver 6: a Grafana story","stackbit_url_path":"posts/taming-puppetserver-6-a-grafana-story-3c4f","date":"2020-05-13T08:32:41.525Z","excerpt":"Using Grafana & Catalog Diff to tune the Puppet 
Server","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--UmzPee8A--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/t2lnmj23y7z3cvgo0b1t.png","comments_count":0,"positive_reactions_count":6,"tags":["puppet","observability","containers","kubernetes"],"canonical_url":"https://dev.to/camptocamp-ops/taming-puppetserver-6-a-grafana-story-3c4f","template":"post"},"html":"<p>After some time preparing for the migration, yesterday was finally the time to switch our first production Puppetserver to Puppet 6.</p>\n<p>Everything was ready: we had been running both versions of the server alongside each other for some time, <a href=\"https://dev.to/camptocamp-ops/automated-puppet-impact-analysis-1c1\">performing catalog diffs</a>, and nothing seemed to be getting in the way as I went into ArgoCD and deployed the new version of the stack.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/4yegip5ggdh0k7npdz8b.png\" alt=\"Deploying the Puppetserver in ArgoCD\"></p>\n<p>The first 30 minutes went fine. But then catalogs started failing compilation, and other services colocated on the OpenShift cluster became slow.</p>\n<h1>The Problem</h1>\n<p>In retrospect, I should have known something was wrong. Two weeks ago when I started my tests with Puppet 6 on another platform, I noticed that the server would die of OOM quite rapidly. Our Puppetserver 5 pods had been running happily for years with the following settings:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">max_active_instances: &#39;4&#39;\njava_xmx: &#39;2g&#39;\n\nrequests:\n  cpu: 10m\n  memory: 2Gi\nlimits:\n  cpu: 3\n  memory: 4Gi</code>\n        </deckgo-highlight-code>\n<p>When I started new Puppet 6 instances with these settings, they would die. 
Initially, I thought the Xmx wasn't high enough so I set it to\n<code>3g</code>\nand everything seemed fine again, for the duration of the catalog-diff tests.</p>\n<p>But when the servers started crashing in production yesterday, it was clear there was another problem. And upgrading the Xmx to a higher value didn't help.</p>\n<p>So we looked at the graphs.</p>\n<h1>The Graphs</h1>\n<p>For years, we have been gathering internal metrics from our Puppetservers. Our <a href=\"https://github.com/camptocamp/charts/tree/master/puppetserver\">Puppetserver Helm chart</a> comes equipped with a JMX Exporter pod to gather the data and send them to Prometheus. We then have a Grafana dashboard presenting all the Puppetserver metrics in a useful manner.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/wxq2pi5b4jn5knp4pn71.png\" alt=\"Puppetserver metrics in Grafana\"></p>\n<p>Looking at the graphs from even before the switch showed that something was indeed amiss.</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/63f6ewg4uheojxp1abv3.png\" alt=\"JVM Metrics for Puppetserver 5 and 6\"></p>\n<p>Clearly, the Puppetserver 6 instances had a very different memory ramp-up (blue and yellow \"teeth\" lines), even during the test phase. I just hadn't noticed then.</p>\n<p>We started a series of tests, using Puppet Catalog Diff runs to load-test the servers, and playing with the various parameters of our stack:</p>\n<ul>\n<li>memory requests</li>\n<li>memory limits</li>\n<li>cpu limits</li>\n<li>Java Xmx</li>\n<li>max active instances</li>\n</ul>\n<p>It quickly became clear that the main factor in our problem was that the memory request was too low.</p>\n<p>The official <a href=\"https://puppet.com/docs/puppetserver/latest/tuning_guide.html#jvm-heap-size\">Puppet documentation</a> gives a rule of thumb for tuning the Puppetserver memory. 
It indicates that each active instance requires 512MB, but another 512MB should be provided for non-heap needs:</p>\n<blockquote>\n<p>You’ll also want to allocate a little extra heap to be used by the rest of the things going on in Puppet Server: the web server, etc. So, a good rule of thumb might be 512MB + (max-active-instances * 512MB).</p>\n</blockquote>\n<p>Our graphs clearly showed that the non-heap memory of the instances stabilized a little over 512MB (around 550MB as I'm writing this).</p>\n<p>Since we requested 4 JRuby instances, we should ensure at least (4+1)*512MB of RAM, so 2.5GB. And while our limit was set to 4GB, the requests were only set to 2GB. Changing the requested memory to a higher value confirmed that this was what was making our servers misbehave.</p>\n<h1>Further tuning</h1>\n<h2>CPU limit</h2>\n<p>We originally set the containers to a CPU limit of 3 because our compute nodes have 4 CPUs and we wanted to leave one free.</p>\n<p>We actually noticed that Puppetserver was using closer to 2.5 CPUs on average. So we set the limit to 4 and saw that the Puppetserver seemed to use even less CPU, down to an average of 2.</p>\n<p>Note that limiting CPUs is necessary when running Java in containers, otherwise Java believes it runs on a single CPU.</p>\n<h2>Max JRuby Instances</h2>\n<p>The recommended number of JRubies changed between Puppet 5 and 6, as stated in the documentation. 
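The sizing rules discussed in this section boil down to simple arithmetic. As a quick sanity check, here is a small sketch (our own numbers, not an official formula):

```python
def default_max_jrubies(num_cpus):
    """Puppet Server's documented default: num-cpus - 1, clamped to [1, 4]."""
    return max(1, min(4, num_cpus - 1))

def heap_mb(max_active_instances, per_instance_mb=512):
    """Heap rule of thumb: 512MB per JRuby instance, plus one extra
    512MB share for the web server and other non-JRuby needs."""
    return (max_active_instances + 1) * per_instance_mb

# 4 CPUs -> 3 JRubies; (3 + 1) * 512MB = 2048MB, i.e. a 2g Xmx
print(default_max_jrubies(4), heap_mb(default_max_jrubies(4)))
```

With our 4-CPU nodes this gives 3 JRuby instances and a 2GB heap, which is exactly where we ended up in the final settings.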
Up to Puppet 4, it was recommended to set it to\n<code>num-cpus + 2</code>\n, but the docs now state:</p>\n<blockquote>\n<p>As of Puppet Server 1.0.8 and 2.1.0, if you don’t provide an explicit value for this setting, we’ll default to num-cpus - 1, with a minimum value of 1 and a maximum value of 4.</p>\n</blockquote>\n<p>We ran load tests with different values of max JRuby instances and found that\n<code>num-cpus - 1</code>\nwas indeed the best value.</p>\n<p>Most importantly, we found that setting the max JRuby instances higher than the number of CPUs made compilation noticeably slower (presumably because it increases context switching between the instances).</p>\n<h2>Final settings</h2>\n<p>Following the guidelines and our tests, we ended up with:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">max_active_instances: &#39;3&#39;   # since we have 4 CPUs\njava_xmx: &#39;2g&#39;              # (3+1)*512MB\n\nrequests:\n  cpu: 10m\n  memory: 3Gi               # 1.5*XmX\nlimits:\n  cpu: 4                    # Use all available CPUs as limit\n  memory: 3.3Gi             # 1.1*request just in case</code>\n        </deckgo-highlight-code>\n<p>On this graph from the last 2 days, we can clearly see the situation before going to production, during the crisis, and after proper tuning:</p>\n<p><img src=\"https://dev-to-uploads.s3.amazonaws.com/i/t2lnmj23y7z3cvgo0b1t.png\" alt=\"JVM Metrics evolution\"></p>\n<h1>Kubernetes-related issues</h1>\n<p>So our Puppetserver was back under control, with pretty similar memory settings to what Puppet 5 used.</p>\n<p>Keeping the two pods we had been running, with their new tuned settings, we ran stress tests by setting the number of parallel threads in the Catalog Diff run.</p>\n<p>When running 12 threads in parallel (far more than what 2 pods with 3 JRubies each can take), we noticed something I had seen before but not understood:</p>\n<deckgo-highlight-code css   highlight-lines=\"\">\n          
<code slot=\"code\">Failed to retrieve catalog for foo.example.com from puppetserver in environment catalog_diff: Failed to open TCP connection to puppetserver:8140 (No route to host - connect(2) for &quot;puppetserver&quot; port 8140)</code>\n        </deckgo-highlight-code>\n<p>I had initially thought this happened because the Puppetserver was too busy and started rejecting connections. But no, this was clearly another problem, linked to Kubernetes networking and readiness probes.</p>\n<p>As we kept an eye on the Pods' readiness during the run, we noticed the pods were going in and out of readiness, and thus being taken out of the Kubernetes service on a regular basis. The TCP connection issues happened when both pods were taken out of the service at the same time, since the service ended up with no endpoints left.</p>\n<p>So we turned to the pod's readiness probe to tune it.</p>\n<p>This is the default readiness probe we use in our Helm chart:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">readinessProbe:\n  httpGet:\n    path: /production/status/test\n    port: http\n    scheme: HTTPS\n    httpHeaders:\n      - name: Accept\n        value: pson\n  initialDelaySeconds: 30</code>\n        </deckgo-highlight-code>\n<p>The initial delay lets the Puppetserver start its first JRuby instance before sending a probe. The default in Kubernetes would be\n<code>0</code>\notherwise, which would clearly fail for a Puppetserver.</p>\n<p>For all other settings, we relied on the defaults. As per <a href=\"https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes\">Kubernetes docs</a>:</p>\n<blockquote>\n<ul>\n<li>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</li>\n<li>periodSeconds: How often (in seconds) to perform the probe. Default to 10 seconds. 
Minimum value is 1.</li>\n<li>timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</li>\n<li>successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</li>\n<li>failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</li>\n</ul>\n</blockquote>\n<p>The\n<code>timeoutSeconds</code>\nparameter looks very low at 1 second by default. Indeed, we know that a busy Puppetserver could take more than 1 second to respond, and that would be perfectly acceptable. So we've set it to\n<code>5</code>\ninstead and the service has been much more stable since.</p>\n<p>We've also set\n<code>failureThreshold</code>\nto\n<code>5</code>\n.</p>\n<h1>Conclusion</h1>\n<p>A little fine-tuning goes a long way!</p>\n<p>Be sure to gather enough data about your Puppetserver so you have the tools to debug its behavior when you need to.</p>\n<p>Do you have Puppet, Kubernetes, or observability needs? 
You can <a href=\"https://www.camptocamp.com/contact/\">contact  us</a> and we'll be happy to put our expertise at your service!</p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/taming-puppetserver-6-a-grafana-story-3c4f\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/templating-puppet-control-repositories-3pk7/","relativePath":"posts/templating-puppet-control-repositories-3pk7.md","relativeDir":"posts","base":"templating-puppet-control-repositories-3pk7.md","name":"templating-puppet-control-repositories-3pk7","frontmatter":{"title":"Templating Puppet Control Repositories","stackbit_url_path":"posts/templating-puppet-control-repositories-3pk7","date":"2020-07-21T08:45:28.730Z","excerpt":"When managing multiple Puppet Control Repositories, modulesync is a very useful tool to keep files in sync.","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--X2kztkXc--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://www.camptocamp.com/wp-content/uploads/xformations_puppet1-720x400.png.pagespeed.ic.UU2oY1Zlj8.webp","comments_count":3,"positive_reactions_count":1,"tags":["puppet","devops","cfgmgmt","tutorial"],"canonical_url":"https://dev.to/camptocamp-ops/templating-puppet-control-repositories-3pk7","template":"post"},"html":"<p>Puppet code is usually deployed using a <a href=\"https://github.com/puppetlabs/control-repo\">Control Repository</a>, a single Git repository used by R10k (or Code Manager on Puppet Enterprise) to manage Puppet environments on Puppet Masters.</p>\n<h1>Why multiple Control Repositories?</h1>\n<p>On 
complex infrastructures with multiple independent Puppet Masters, you might need to use multiple control repositories. For example, at Camptocamp, we have clients with enough nodes to each justify their own Puppet infrastructure.</p>\n<p>For these clients, we do not want to use a shared Puppet Control Repository. However, we do want to keep the code as similar as possible between the infrastructures, and make sure some parameters and settings (admin accounts, ssh keys, etc.) are synchronized.</p>\n<h1>Modulesync to the rescue</h1>\n<p>Modulesync is a piece of software initially created by Puppet Inc. to synchronize files between Git repositories for Puppet modules. Nowadays, this feature is provided by PDK for Puppet modules, so modulesync is now <a href=\"https://github.com/voxpupuli/modulesync/\">managed by the Vox Pupuli community</a>.</p>\n<p>For years, we have been using it at Camptocamp to keep our Control Repositories synchronized.</p>\n<p>In order to achieve this, we use a template repository, which we call\n<code>puppetmaster-common</code>\n. </p>\n<p>Each of our clients has their own GitLab instance with their Puppet Control Repository, and this template repository brings it all together. </p>\n<p>This repository is set up as follows.</p>\n<h2>modulesync.yml</h2>\n<p>This file contains the general settings for modulesync:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\n# default namespace in GitLab instances\nnamespace: &#39;camptocamp&#39;\n# Branch to synchronize\nbranch: &#39;msync&#39;\n# Default Merge Request title\npr_title: &#39;Modulesync [autodiff]&#39;\n# Default Merge Request target branch\npr_target_branch: &#39;staging&#39;</code>\n        </deckgo-highlight-code>\n<p>On all our Control Repositories, we have locked the\n<code>stable</code>\nand\n<code>staging</code>\nbranches to prevent pushes to them. 
This forces us to create Merge Requests for new features, ensuring quality <a href=\"https://dev.to/camptocamp-ops/automated-puppet-impact-analysis-1c1\">through our CI pipeline</a>.</p>\n<p>For this reason, we use a separate branch, called\n<code>msync</code>\n, to perform the synchronizations.</p>\n<h2>managed_modules.yml</h2>\n<p>Since we use several GitLab instances and we want to be able to automate Merge Request creation, this file contains GitLab API URLs and tokens per managed Control Repository. It looks similar to this:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">puppetmaster-c2c:\n  :remote: &#39;ssh://git@gitlab1/camptocamp/is/puppet/puppetmaster-c2c.git&#39;\n  :namespace: &#39;camptocamp/is/puppet&#39;\n  :gitlab:\n    :token: &#39;abc123def456&#39;\n    :base_url: &#39;https://gitlab1/api/v4&#39;\n\npuppetmaster-client1:\n  :remote: &#39;ssh://git@gitlab-client1/puppet/puppetmaster-client1.git&#39;\n  :namespace: &#39;puppet&#39;\n  :gitlab:\n    :token: &#39;someOtherToken&#39;\n    :base_url: &#39;https://gitlab-client1/api/v4&#39;</code>\n        </deckgo-highlight-code>\n<h2>moduleroot</h2>\n<p>The\n<code>moduleroot</code>\ndirectory contains the files we want to synchronize, as ERB templates. 
In our case:</p>\n<deckgo-highlight-code    highlight-lines=\"\">\n          <code slot=\"code\">moduleroot/\n├── doc\n│   ├── architecture.md.erb\n│   └── before_after.md.erb\n├── environment.conf.erb\n├── Gemfile.erb\n├── .gitignore.erb\n├── .gitlab-ci.yml.erb\n├── hieradata\n│   └── cross-site\n│       ├── common-cross-site.yaml.erb\n│       ├── README.md.erb\n│       ├── .travis.yml.erb\n│       └── verify-key-length.erb\n├── hiera-eyaml-gpg.recipients.erb\n├── Puppetfile.erb\n├── .puppet-lint.rc.erb\n├── Rakefile.erb\n├── README.md.erb\n└── scripts\n    ├── bolt.erb\n    ├── docker.erb\n    ├── node_deactivate.erb\n    ├── puppetca.erb\n    └── puppet-query.erb</code>\n        </deckgo-highlight-code>\n<p>A few notes on these files.</p>\n<h3>Static files</h3>\n<p>Most of these files (e.g. the scripts,\n<code>Gemfile</code>\n, or\n<code>environment.conf</code>\n) are actually static, but they need to be named\n<code>.erb</code>\nnonetheless, otherwise\n<code>modulesync</code>\nwill ignore them.</p>\n<h3>hiera-eyaml-gpg.recipients</h3>\n<p><code>hiera-eyaml-gpg.recipients.erb</code>\nworks essentially as a filter on the\n<code>hiera-eyaml-gpg.recipients</code>\nfile at the top of the repository, taking every admin key, as well as one\n<code>puppet@</code>\nkey specified in the\n<code>.sync.yml</code>\nof the control repository with the\n<code>master_gpg_key</code>\nsetting:</p>\n<deckgo-highlight-code erb   highlight-lines=\"\">\n          <code slot=\"code\">&lt;%=\nbasedir = File.expand_path(&#39;..&#39;, File.dirname(__FILE__))\nrecipients_file = File.expand_path(File.join(basedir, &#39;hiera-eyaml-gpg.recipients&#39;))\n\nFile.readlines(recipients_file).map { |l|\n  r = l.strip\n  if r =~ /^puppet@/\n    r if @configs[&#39;master_gpg_key&#39;] == r\n  else\n    r\n  end\n}.compact.join(&quot;\\n&quot;)\n%&gt;</code>\n        </deckgo-highlight-code>\n<h3>Puppetfile</h3>\n<p>Similar 
to\n<code>hiera-eyaml-gpg.recipients</code>\n,\n<code>Puppetfile</code>\nis managed as a filter. We keep a full\n<code>Puppetfile</code>\nat the top of the repository, with all the modules we use on all Puppet Infrastructures, and the default versions we want. Then each Control Repository can pick which modules to include and optionally override versions.</p>\n<p>The\n<code>Puppetfile.erb</code>\ntemplate uses Augeas to cleanly filter and rewrite the target\n<code>Puppetfile</code>\n:</p>\n<deckgo-highlight-code erb   highlight-lines=\"\">\n          <code slot=\"code\">############################################### \n# This file is managed in puppetmaster-common # \n# Do not edit locally                         # \n############################################### \n\n&lt;%= require &#39;augeas&#39;\nbasedir = File.expand_path(&#39;..&#39;, File.dirname(__FILE__))\nbase_pf = File.expand_path(File.join(basedir, &#39;Puppetfile&#39;))\nbase_pf_content = File.read(base_pf)\nlens_dir = File.expand_path(File.join(basedir, &#39;lenses&#39;))\n\ndef mod_regexp(name)\n  &quot;*[label()!=&#39;#comment&#39; and .=~regexp(&#39;([^/-]+[/-])?#{name}&#39;)]&quot;\nend\n\nAugeas.open(nil, lens_dir, Augeas::NO_MODL_AUTOLOAD) do |aug|\n  aug.set(&#39;/input&#39;, base_pf_content)\n  unless aug.text_store(&#39;Puppetfile.lns&#39;, &#39;/input&#39;, &#39;/parsed&#39;)\n      msg = aug.get(&#39;/augeas//error&#39;)\n      fail &quot;Failed to parse common Puppetfile: #{msg}&quot;\n  end\n  aug.set(&#39;/augeas/context&#39;, &#39;/parsed&#39;)\n  all_modules = aug.match(&#39;*[label()!=&quot;#comment&quot;]&#39;).map { |m| aug.get(m).split(%r{[/-]}).last }\n\n  whitelist = @configs[&#39;modules&#39;].keys if @configs[&#39;modules&#39;]\n  not_in_all = whitelist - all_modules if whitelist\n  fail &quot;Module(s) #{not_in_all.join(&#39;, &#39;)} not found in common Puppetfile&quot; if not_in_all and !not_in_all.empty?\n\n  # Remove unnecessary modules\n  (all_modules - 
whitelist).each do |m|\n    aug.rm(mod_regexp(m))\n  end if whitelist\n\n  # Amend\n  modified = @configs[&#39;modules&#39;].reject { |m, v| v.nil? } if @configs[&#39;modules&#39;]\n  modified.each do |m, c|\n    aug.set(mod_regexp(m), &quot;#{c[&#39;user&#39;]}/#{m}&quot;) if c[&#39;user&#39;]\n    if c[&#39;version&#39;]\n      aug.rm(&quot;#{mod_regexp(m)}/git&quot;)\n      aug.rm(&quot;#{mod_regexp(m)}/ref&quot;)\n      aug.set(&quot;#{mod_regexp(m)}/@version&quot;, c[&#39;version&#39;])\n    else\n      aug.rm(&quot;#{mod_regexp(m)}/@version&quot;)\n      aug.set(&quot;#{mod_regexp(m)}/git&quot;, c[&#39;git&#39;]) if c[&#39;git&#39;]\n      aug.set(&quot;#{mod_regexp(m)}/ref&quot;, c[&#39;ref&#39;]) if c[&#39;ref&#39;]\n    end\n  end if modified\n\n  aug.text_retrieve(&#39;Puppetfile.lns&#39;, &#39;/input&#39;, &#39;/parsed&#39;, &#39;/output&#39;)\n  unless aug.match(&#39;/augeas/text/parsed/error&#39;).empty?\n    fail &quot;Failed to generate Puppetfile: #{aug.get(&#39;/augeas/text/parsed/error&#39;)}\n  #{aug.get(&#39;/augeas/text/parsed/error/message&#39;)}&quot;\n  end\n  aug.get(&#39;/output&#39;)\nend -%&gt;</code>\n        </deckgo-highlight-code>\n<h3>.gitlab-ci.yml.erb</h3>\n<p>This file defines the CI/CD pipelines for our Control Repositories, extending our <a href=\"https://github.com/camptocamp/puppet-gitlabci-pipelines\">generic Puppet pipelines rules</a>. It takes variables to control catalog-diff.</p>\n<h3>cross-site hieradata</h3>\n<p>The cross-site hieradata level contains common system accounts with their UID, shell &#x26; SSH key. We then use <a href=\"https://forge.puppet.com/camptocamp/accounts\">our accounts module</a> to deploy these accounts. </p>\n<h1>Sample .sync.yml</h1>\n<p>Each Control Repository features a\n<code>.sync.yml</code>\nfile to provide overrides for variables. 
Here's an example:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\nRakefile:\n  master_gpg_key: &#39;puppet@client1&#39;\n.gitlab-ci.yml:\n  puppetdb_urls: &#39;https://puppetdb.client1.ch&#39;\n  puppet_server: &#39;puppet.client1.ch&#39;\n  puppetdiff_url: &#39;https://puppetdiff.client1.ch&#39;\nPuppetfile:\n  modules:\n    # include accounts module, with default version\n    accounts:\n    # include letsencrypt module, override version\n    letsencrypt:\n      git: &#39;https://github.com/saimonn/puppet-letsencrypt&#39;\n      ref: &#39;default_cert_name&#39;</code>\n        </deckgo-highlight-code>\n<h1>Usage</h1>\n<p>Since\n<code>managed_modules.yml</code>\ncontains secret tokens for the various GitLabs, we don't want to commit it to the Git repository. Instead, the content of this file is stored in <a href=\"https://github.com/gopasspw/gopass\">\n<code>gopass</code>\n</a> and retrieved dynamically with <a href=\"https://github.com/cyberark/summon\">\n<code>summon</code>\n</a>.</p>\n<p>In order to use\n<code>summon</code>\n, we have a local\n<code>secrets.yml</code>\npointing to the location of the\n<code>managed_modules.yml</code>\nfile in\n<code>gopass</code>\n:</p>\n<deckgo-highlight-code yaml   highlight-lines=\"\">\n          <code slot=\"code\">---\nMSYNC_MANAGED_MODULES: !var:file puppet/msync/managed_modules</code>\n        </deckgo-highlight-code>\n<p>and use a\n<code>msync_update</code>\nwrapper to launch\n<code>modulesync</code>\n:</p>\n<deckgo-highlight-code bash   highlight-lines=\"\">\n          <code slot=\"code\">#!/bin/bash\n\nbundle exec msync update --managed_modules_conf=$MSYNC_MANAGED_MODULES &quot;$@&quot;</code>\n        </deckgo-highlight-code>\n<p>This then allows testing the changes with:</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ summon ./msync_update -m &quot;Update module foo&quot; --noop</code>\n        </deckgo-highlight-code>\n<p>and 
then deploy on a single site (or all without the filter):</p>\n<deckgo-highlight-code shell   highlight-lines=\"\">\n          <code slot=\"code\">$ summon ./msync_update -m &quot;Update module foo&quot; -f c2c --pr</code>\n        </deckgo-highlight-code>\n<p><em>Do you have specific Puppet needs? <a href=\"https://www.camptocamp.com/contact\">Contact us</a>, we can help you!</em></p>\n<p><em><a href=\"https://dev.to/camptocamp-ops/templating-puppet-control-repositories-3pk7\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "},{"url":"/posts/visibility-comments-b65/","relativePath":"posts/visibility-comments-b65.md","relativeDir":"posts","base":"visibility-comments-b65.md","name":"visibility-comments-b65","frontmatter":{"title":"How to encourage interaction on dev.to posts?","stackbit_url_path":"posts/visibility-comments-b65","date":"2020-06-10T19:19:35.493Z","excerpt":"After a few years of being inactive on dev.to, I've started actively posting about a month ago.  
I se...","thumb_img_path":"https://res.cloudinary.com/practicaldev/image/fetch/s--BobVuS2v--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/w14l9yddz1nqrnvo3xfe.jpg","comments_count":0,"positive_reactions_count":1,"tags":["discuss","writing","beginners","question"],"canonical_url":"https://dev.to/raphink/visibility-comments-b65","template":"post"},"html":"<p>After a few years of being inactive on dev.to, I've started actively posting about a month ago.</p>\n<p>I see quite a few visits to some of my posts (up to nearly 400 views for some), but I get absolutely no comments (unless it's a small post with the\n<code># discuss</code>\ntag).</p>\n<p>Is that normal? Is there a way on dev.to to drive more reactions/interactions? Is it the subject (mainly DevOps/SRE-related) that is not core to this platform?</p>\n<p>Am I missing something?</p>\n<p><em><a href=\"https://dev.to/raphink/visibility-comments-b65\">This post is also available on DEV.</a></em></p>\n<script>\nconst parent = document.getElementsByTagName('head')[0];\nconst script = document.createElement('script');\nscript.type = 'text/javascript';\nscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.1.1/iframeResizer.min.js';\nscript.charset = 'utf-8';\nscript.onload = function() {\n    window.iFrameResize({}, '.liquidTag');\n};\nparent.appendChild(script);\n</script>    "}],"site":{"siteMetadata":{"description":"A minimal blogging theme for Stackbit","header":{"title":"Raphaël Pinson","tagline":"Infrastructure Developer working @camptocamp. 
","background_img":"images/header-bg.jpg","has_nav":true,"nav_links":[{"label":"Home","url":"https://raphink.info","type":"link"},{"label":"Contact","url":"/contact/","type":"link"}],"has_social":true,"social_links":[{"label":"GitHub","url":"https://github.com/raphink","type":"icon","icon_class":"fa-github","new_window":true},{"label":"DEV","url":"https://dev.to/raphink","type":"icon","icon_class":"fa-dev","new_window":true},{"label":"Twitter","url":"https://twitter.com/raphink","type":"icon","icon_class":"fa-twitter","new_window":true},{"label":"LinkedIn","url":"https://www.linkedin.com/in/raphink","type":"icon","icon_class":"fa-linkedin","new_window":true},{"label":"Stack Exchange","url":"https://stackexchange.com/users/82664/%e2%84%9daphink","type":"icon","icon_class":"fa-stack-exchange","new_window":true}]},"footer":{"content":"&copy; All rights reserved.","links":[{"label":"Made with Stackbit.","url":"https://www.stackbit.com","type":"link","new_window":true},{"label":"Generated from DEV","url":"https://dev.to/connecting-with-stackbit","new_window":true,"type":"link"}]},"palette":"yellow","title":"Open Source Automation"},"pathPrefix":"","data":{"data":{"author":{"name":"Raphaël Pinson","avatar":"https://res.cloudinary.com/practicaldev/image/fetch/s--BG4LRVnz--/c_fill,f_auto,fl_progressive,h_640,q_auto,w_640/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/59811/cdcdbc95-1306-4455-9f79-fa032c300206.jpeg"},"social":{"devto":{"username":"raphink"},"github":{"username":"raphink"}}}}},"menus":{}}}