Debugging the Charmed Operator Framework

This section will guide you through debugging your charm when it isn’t working as intended, or when you’d like tools beyond printf-style debugging with log messages sent to juju debug-log.

The debug-log command

Running juju debug-log provides a consolidated view of messages from both the Juju agents and charm logs. The logs show the detailed inner workings of Juju, as well as any juju-log messages emitted from charm code.

In the case of Charmed Operators, the Operator Framework sets up a Python logging.Handler which forwards all of your logging messages to juju-log, so you’ll see those also.
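To illustrate the mechanism, here is a simplified sketch of a logging.Handler that forwards records to the juju-log hook tool. This is not the framework's actual implementation (that lives inside ops and is installed for you automatically); it only shows the idea, and it guards against running outside a hook context, where juju-log isn't on PATH.

```python
import logging
import shutil
import subprocess

class JujuLogHandler(logging.Handler):
    """Forward Python log records to the `juju-log` hook tool.

    A simplified sketch of what the Operator Framework sets up for you,
    shown here only to illustrate the mechanism.
    """

    def emit(self, record: logging.LogRecord) -> None:
        self._send(record.levelname, self.format(record))

    def _send(self, level: str, message: str) -> None:
        # `juju-log --log-level <LEVEL> <message>` writes to the unit's log.
        subprocess.run(["juju-log", "--log-level", level, message], check=False)

# Only attach the handler when running inside a hook context,
# where the `juju-log` tool is on PATH.
logger = logging.getLogger("my-charm")
if shutil.which("juju-log"):
    logger.addHandler(JujuLogHandler())
    logger.setLevel(logging.DEBUG)
```

In practice your charm code never needs any of this: it just calls `logger.debug(...)` and the framework's handler does the forwarding.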

By default, logging is set to the INFO level for newly-created models, so if your code logs debug messages, you’ll need to run the following to see them:

juju model-config logging-config="<root>=INFO;unit=DEBUG"

Alternatively, you can set this as the default for every model created by a controller by substituting the name of your controller in the following command:

juju model-defaults <controller-name> logging-config='<root>=INFO; unit=DEBUG'

See the Juju logs documentation for details about what you can do with logs: replay them, filter them, send to a remote target, check audit logs, and more.

Viewing relation data with show-unit

When a relation between charms is established, Juju offers several ways to view the state of the relation. If many relations have been established, you may want to be very explicit about the relation and data you are querying. In other cases, you may simply want to see all the relation data a given unit has access to.

If you want to be specific, you can get a list of which relations are established on a specified interface, then query a specific relation to find the application on the other side, and finally get the data for that relation.

$ juju run --unit your-charm/0 "relation-ids foo"
foo:123
$ juju run --unit your-charm/0 "relation-list -r foo:123"
other-charm/0
$ juju run --unit your-charm/0 "relation-get -r foo:123 - other-charm/0"
hostname: 1.2.3.4
password: passw0rd
private-address: 2.3.4.5
somekey: somedata
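The three-step lookup above can also be scripted. The sketch below is hypothetical (the function name and the injected `run` callable are not part of any Juju API); `run` stands in for however you execute a hook command on the unit, for example by shelling out to `juju run --unit <unit> "<cmd>"`. The trivial key/value parsing keeps the sketch dependency-free; real relation-get output is YAML and deserves a proper parser.

```python
from typing import Callable, Dict

def relation_data(run: Callable[[str], str], endpoint: str,
                  remote_unit: str) -> Dict[str, str]:
    """Walk the same three steps as above:
    relation-ids -> relation-list -> relation-get.

    `run` executes a hook command on the unit and returns its stdout
    (e.g. by wrapping `juju run --unit <unit> "<cmd>"`).
    """
    # Step 1: find a relation id for the endpoint.
    relation_id = run(f"relation-ids {endpoint}").strip().splitlines()[0]
    # Step 2: confirm the remote unit is on the other side.
    units = run(f"relation-list -r {relation_id}").split()
    if remote_unit not in units:
        raise ValueError(f"{remote_unit} not in relation {relation_id}")
    # Step 3: fetch the remote unit's databag.
    raw = run(f"relation-get -r {relation_id} - {remote_unit}")
    # relation-get prints YAML; simple "key: value" parsing suffices for
    # this sketch, but use a YAML parser for real, nested data.
    return dict(line.split(": ", 1) for line in raw.strip().splitlines())
```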

In other cases, it may be preferable to simply interrogate a unit for all of the data it can see. Note: this command returns significantly faster, since it queries the controller directly rather than dispatching a command on the unit with juju run and waiting for the result. For example:

$ juju show-unit grafana/0
grafana/0:
  opened-ports: []
  charm: local:focal/grafana-k8s-20
  leader: true
  relation-info:
  - endpoint: grafana-peers
    related-endpoint: grafana-peers
    application-data: {}
    local-unit:
      in-scope: true
      data:
        egress-subnets: 10.152.183.202/32
        ingress-address: 10.152.183.202
        private-address: 10.152.183.202
  - endpoint: grafana-source
    related-endpoint: grafana-source
    application-data:
      grafana_source_data: '{"model": "lma", "model_uuid": "c80e14c0-39c0-41b1-8c2b-c9d92abbc2ed",
        "application": "prometheus", "type": "prometheus"}'
    related-units:
      prometheus/0:
        in-scope: true
        data:
          egress-subnets: 10.152.183.175/32
          grafana_source_host: 10.1.48.94:9090
          ingress-address: 10.152.183.175
          private-address: 10.152.183.175
  provider-id: grafana-0
  address: 10.1.48.70
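One thing worth noting in the output above: relation data values are always strings, so structured payloads like grafana_source_data are typically JSON-encoded and must be decoded by the consumer. The snippet below transcribes that fragment as a Python dict to keep it self-contained; in practice you would parse the command's output itself, e.g. `juju show-unit grafana/0 --format json` piped into `json.load`.

```python
import json

# A fragment of the `juju show-unit grafana/0` output above, transcribed
# as a Python dict for illustration.
relation_info = [
    {
        "endpoint": "grafana-source",
        "related-endpoint": "grafana-source",
        "application-data": {
            "grafana_source_data": '{"model": "lma", '
            '"model_uuid": "c80e14c0-39c0-41b1-8c2b-c9d92abbc2ed", '
            '"application": "prometheus", "type": "prometheus"}'
        },
    }
]

# The value is a JSON string inside the YAML document, so it needs a
# second round of decoding:
source = json.loads(relation_info[0]["application-data"]["grafana_source_data"])
print(source["application"])  # prometheus
```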

Log file location

Since log messages stream in real time, it is possible to miss messages while using the debug-log command. If you need to view log entries from before you ran the juju debug-log command, you can pass the --replay option.

Alternatively, you can SSH to the machine and view the log files directly. Use juju ssh <machine-number> to access an individual machine. If you’re using the Charmed Operator Framework with MicroK8s, you can SSH to a unit with juju ssh your-charm/0 for the first unit, juju ssh your-charm/1 for the second unit, and so on.

The Juju log files can be found in the /var/log/juju directory for machine charms.

Log files on the controller

The machine running the controller is not represented in the Juju model and is therefore not accessible by machine number. If you need the log files from the controller, you have a few options. Most directly, you can switch to the controller model in Juju and SSH by number:

juju switch controller
juju ssh 0

Alternatively, if you’re using the Charmed Operator Framework with MicroK8s, you can SSH to the controller with juju ssh -m controller 0, assuming you followed the Juju MicroK8s bootstrapping guide.

The Juju debug-code command

If you’re using the Charmed Operator Framework, you can jump into live debugging with pdb without changing the charm code at all. Simply run the following command:

juju debug-code --at=hook <unit>

This will make the Operator Framework automatically interrupt the running charm at the beginning of the registered callback method(s) for any events and/or actions. A tmux window will open and wait until a hook or callback executes.

When that happens, it will place you into an interactive debugging session with pdb.

Example output:

2021-06-18 12:50:25,494 DEBUG    Operator Framework 1.2.0 up and running.
2021-06-18 12:50:25,503 DEBUG    Legacy hooks/config-changed does not exist.
2021-06-18 12:50:25,538 DEBUG    Emitting Juju event config_changed.

Starting pdb to debug charm operator.
Run `h` for help, `c` to continue, or `exit`/CTRL-d to abort.
Future breakpoints may interrupt execution again.
More details at https://discourse.jujucharms.com/t/debugging-charm-hooks

> /var/lib/juju/agents/unit-content-cache-k8s-0/charm/src/charm.py(63)_on_config_changed()
-> msg = 'Configuring workload container (config-changed)'
(Pdb) n
> /var/lib/juju/agents/unit-content-cache-k8s-0/charm/src/charm.py(64)_on_config_changed()
-> logger.info(msg)
(Pdb) n
2021-06-18 12:50:32,831 INFO     Configuring workload container (config-changed)
> /var/lib/juju/agents/unit-content-cache-k8s-0/charm/src/charm.py(65)_on_config_changed()
-> self.model.unit.status = MaintenanceStatus(msg)
(Pdb) self.model.unit
<ops.model.Unit content-cache-k8s/0>
(Pdb) self.model.unit.status
ActiveStatus('Ready')
(Pdb) n
> /var/lib/juju/agents/unit-content-cache-k8s-0/charm/src/charm.py(66)_on_config_changed()
-> self.configure_workload_container(event)
(Pdb) self.model.unit.status
MaintenanceStatus('Configuring workload container (config-changed)')
(Pdb) n
2021-06-18 12:50:47,213 INFO     Assembling k8s ingress config
2021-06-18 12:50:47,305 INFO     Assembling environment configs
2021-06-18 12:50:47,356 INFO     Assembling pebble layer config
2021-06-18 12:50:47,380 INFO     Assembling Nginx config
2021-06-18 12:50:47,414 INFO     Updating Nginx site config
2021-06-18 12:50:47,484 INFO     Updating pebble layer config
2021-06-18 12:50:47,533 INFO     Stopping content-cache
2021-06-18 12:50:47,922 INFO     Starting content-cache
2021-06-18 12:50:49,018 INFO     Ready
--Return--
> /var/lib/juju/agents/unit-content-cache-k8s-0/charm/src/charm.py(66)_on_config_changed()->None
-> self.configure_workload_container(event)

Typing n at this point (execute the next command) will run the final line of the config-changed handler, and then end the pdb session.

As you can see, during this process we were able to inspect Operator Framework primitives directly at runtime, such as self.model.unit and self.model.unit.status. For more information about what you can do with pdb, see the pdb documentation.

You can also pass the name of the event or action if you only want to debug or inspect a specific event or action:

juju debug-code --at=hook <unit> config-changed

This will interrupt your running charm at the beginning of the handler for the config-changed event, which might be defined in your code as follows:

self.framework.observe(self.on.config_changed, self._on_config_changed)

If you prefer to set a specific breakpoint at a particular line of code in your charm, you can add this at the relevant place:

self.framework.breakpoint()

Then simply run juju debug-code <unit>, and your pdb session will begin whenever the above line is reached. There is no need to specify --at in this case.
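Under the hood, juju debug-code communicates the requested breakpoints to the charm through the JUJU_DEBUG_AT environment variable (a comma-separated list of breakpoint names, with "all" and "hook" as special values). The sketch below is not the framework's actual code, only a simplified illustration (the function name `should_break` is hypothetical) of how a named breakpoint can be gated on that variable:

```python
import os
from typing import Optional

def should_break(name: Optional[str] = None) -> bool:
    """Decide whether a breakpoint should fire, based on JUJU_DEBUG_AT.

    Hypothetical sketch: the real decision logic lives inside the
    Operator Framework (ops); this only illustrates the idea.
    """
    indicated = {
        part.strip()
        for part in os.environ.get("JUJU_DEBUG_AT", "").split(",")
        if part.strip()
    }
    if not indicated:
        # juju debug-code is not attached: never interrupt execution.
        return False
    if "all" in indicated:
        return True
    # Named breakpoints fire only when explicitly requested.
    return name is not None and name in indicated
```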

Considerations

While you’re debugging one unit, execution of all hooks on that machine or related to that charm is blocked, since Juju locks the model until the hook is resolved.

This is generally helpful, because you don’t want to contend with concurrent changes to the runtime environment while you’re debugging. Be aware, however, that multiple debug-code sessions for units assigned to the same machine will block one another, and that you can’t directly control their relative execution order, other than by erroring out of hooks you don’t want to run yet and retrying them later.

Pebble

If your workload is running in Kubernetes or MicroK8s, it’s often useful to be able to inspect the running Pebble plan. To do so, you should juju ssh into the workload container for your charm, and run /charm/bin/pebble plan. Here’s an example:

$ juju ssh --container concourse-worker concourse-worker/0
# /charm/bin/pebble plan
services:
    concourse-worker:
        summary: concourse worker node
        startup: enabled
        override: replace
        command: /usr/local/bin/entrypoint.sh worker
        environment:
            CONCOURSE_BAGGAGECLAIM_DRIVER: overlay
            CONCOURSE_TSA_HOST: 10.1.234.43:2222
            CONCOURSE_TSA_PUBLIC_KEY: /concourse-keys/tsa_host_key.pub
            CONCOURSE_TSA_WORKER_PRIVATE_KEY: /concourse-keys/worker_key
            CONCOURSE_WORK_DIR: /opt/concourse/worker

In some cases, your workload container might not allow you to run things in it, if, for instance, it’s based on a “scratch” image. To get around this, you can run the same command from your charm container with a small modification to point to the correct location for the pebble socket.

$ juju ssh concourse-worker/0
# PEBBLE_SOCKET=/charm/containers/concourse-worker/pebble.socket /charm/bin/pebble plan
services:
    concourse-worker:
        summary: concourse worker node
        startup: enabled
        override: replace
        command: /usr/local/bin/entrypoint.sh worker
        environment:
            CONCOURSE_BAGGAGECLAIM_DRIVER: overlay
            CONCOURSE_TSA_HOST: 10.1.234.43:2222
            CONCOURSE_TSA_PUBLIC_KEY: /concourse-keys/tsa_host_key.pub
            CONCOURSE_TSA_WORKER_PRIVATE_KEY: /concourse-keys/worker_key
            CONCOURSE_WORK_DIR: /opt/concourse/worker
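Following the socket path shown above, each workload container's Pebble socket is mounted into the charm container at a predictable location. A tiny helper (hypothetical, not part of any Juju or ops API) to build that path:

```python
def workload_pebble_socket(container_name: str) -> str:
    """Path, inside the charm container, of the Pebble socket for a
    sidecar workload container, per the layout shown above.

    Hypothetical helper for illustration only.
    """
    return f"/charm/containers/{container_name}/pebble.socket"
```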
