Title here
Summary here
Filter an array of strings
cat pp.json | jq '.p_cfg[] | . as $network | .networks[] | select(contains("192.168.100")) | $network.upstreams'
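A minimal sketch of what this filter does, with a hypothetical pp.json shaped the way the query assumes (each `.p_cfg` entry carries a `networks` array of CIDR strings plus an `upstreams` field; all names and addresses here are made up):

```shell
# Hypothetical input matching the shape the filter expects.
cat > pp.json <<'EOF'
{"p_cfg": [
  {"networks": ["192.168.100.0/24"], "upstreams": ["10.0.0.1"]},
  {"networks": ["172.16.0.0/12"],    "upstreams": ["10.0.0.2"]}
]}
EOF
# Binding each entry to $network keeps the whole object reachable after
# .networks[] narrows the stream down to individual strings.
jq -c '.p_cfg[] | . as $network | .networks[] | select(contains("192.168.100")) | $network.upstreams' pp.json
# prints ["10.0.0.1"]
```

The `. as $network` binding is the key trick: once you descend into `.networks[]`, `.` is just a string, so the variable is the only way back to the parent object's other fields.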
Get key value out of consul kv export file
cat data.json | jq -r '.[] | select(.Key=="key/path/here") | .Value' | base64 --decode | zstd -d
Or better yet, if you want to get multiple keys by prefix:
cat data.json | jq -r '.[] | select(.Key | startswith("key/path/here")) | .Value' | base64 --decode | zstd -d
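A minimal sketch of the select-and-decode step, assuming the consul export shape (a JSON array of {Key, Value} objects with base64-encoded Values). The file and key here are hypothetical, and the zstd stage is dropped because the sample value is plain text:

```shell
# Hypothetical export file; "aGVsbG8=" is base64 for "hello".
cat > data.json <<'EOF'
[
  {"Key": "app/config", "Value": "aGVsbG8="},
  {"Key": "other/key",  "Value": "eA=="}
]
EOF
jq -r '.[] | select(.Key == "app/config") | .Value' data.json | base64 --decode
# prints hello
```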
Get average timings per serverIPAddress. Good way to understand performance by upstream IP / protocol.
Other columns could also be included; see what else is available in the HAR. Note that filtering out non-200 responses is also an option.
jq -r '.log.entries[] | select(.serverIPAddress != "") | [.serverIPAddress, .time] | @tsv' Sao_Paulo.har | \
python3 -c '
import sys
from collections import defaultdict

# Collect every timing sample per server IP.
timings = defaultdict(list)
for line in sys.stdin:
    ip, val = line.strip().split("\t")
    timings[ip].append(float(val))

# Sort IPs by average time, slowest first.
for ip, vals in sorted(timings.items(), reverse=True, key=lambda item: sum(item[1]) / len(item[1])):
    avg = sum(vals) / len(vals)
    print(f"{ip}\t{avg:.6f}")
' | column -t
Get the number of requests per upstream IP. Good for checking whether the averages above are a fluke (though you might want to bake a minimum-request threshold into the query above instead, or output the count as a third column).
jq -r '.log.entries[] | select(.serverIPAddress != "") | .serverIPAddress' Sao_Paulo.har | sort | uniq -c
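If you do want the request count baked in as a third column, the whole aggregation can be done in jq alone, with no Python helper. A sketch, assuming only the standard `.log.entries` HAR shape used above:

```shell
# Group entries by upstream IP, then emit one row per IP:
# address, average time, request count.
jq -r '[.log.entries[] | select(.serverIPAddress != "")]
       | group_by(.serverIPAddress)[]
       | [.[0].serverIPAddress, (map(.time) | add / length), length]
       | @tsv' Sao_Paulo.har | column -t
```

`group_by` sorts by the grouping key, so the output is ordered by IP rather than by average; pipe through `sort -t$'\t' -k2 -rn` before `column -t` if you want slowest-first.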