Title here
Summary here
Filter objects based on a match in an array of strings
cat pp.json | jq '.p_cfg[] | . as $network | .networks[] | select(contains("192.168.100")) | $network.upstreams'
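For reference, a made-up pp.json shape this filter assumes (field names taken from the filter itself): only the first entry below has a network string containing "192.168.100", so only its upstreams are printed.

{
  "p_cfg": [
    { "networks": ["10.0.0.0/24", "192.168.100.0/24"], "upstreams": ["10.1.1.1", "10.1.1.2"] },
    { "networks": ["172.16.0.0/12"], "upstreams": ["10.2.2.2"] }
  ]
}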
Get key value out of consul kv export file
cat data.json | jq -r '.[] | select(.Key=="key/path/here") | .Value' | base64 --decode | zstd -d
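This assumes data.json is an array of objects shaped roughly like the one below, with each value base64-encoded and zstd-compressed (field names as used in the filter; your export may include other fields as well):

[
  { "Key": "key/path/here", "Value": "<base64 of zstd-compressed payload>" }
]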
Or better yet, if you want to get multiple keys by prefix:
cat data.json | jq -r '.[] | select(.Key | startswith("key/path/here")) | .Value' | base64 --decode | zstd -d
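When the prefix matches several keys, jq emits one base64 string per line. If you want each matched value decoded separately rather than fed to a single decode step, a sketch using a shell loop (assumes one base64 value per line of jq output):

cat data.json | jq -r '.[] | select(.Key | startswith("key/path/here")) | .Value' |
while IFS= read -r value; do
  # Decode and decompress each key's value on its own.
  printf '%s' "$value" | base64 --decode | zstd -d
done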
Get average timings per serverIPAddress. A good way to understand performance by upstream IP / protocol.
You could also include other columns; see what else is available in the HAR. Note that filtering out non-200 responses is also an option.
jq -r '.log.entries[] | select(.serverIPAddress != "") | [.serverIPAddress, .time] | @tsv' Sao_Paulo.har | \
python3 -c '
import sys
from collections import defaultdict

# Collect every observed time per server IP.
timings = defaultdict(list)
for line in sys.stdin:
    ip, val = line.strip().split("\t")
    timings[ip].append(float(val))

# Print averages, slowest first.
for ip, times in sorted(timings.items(), reverse=True, key=lambda kv: sum(kv[1]) / len(kv[1])):
    avg = sum(times) / len(times)
    print(f"{ip}\t{avg:.6f}")
' | column -t
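If you would rather keep the aggregation in jq, a group_by sketch that produces the same IP / average pairs (sorted slowest first by the shell):

jq -r '[.log.entries[] | select(.serverIPAddress != "")] | group_by(.serverIPAddress)[] | [.[0].serverIPAddress, (map(.time) | add / length)] | @tsv' Sao_Paulo.har | sort -t$'\t' -k2,2 -rn | column -t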
Get the number of requests per upstream IP. Good for understanding whether the averages above are a fluke (though you might want to bake some kind of lower threshold into the query above instead, or output the count as a third column; a sketch of that variant follows below).
jq -r '.log.entries[] | select(.serverIPAddress != "") | .serverIPAddress' Sao_Paulo.har | sort | uniq -c