Re: Statistics queries



On Mon, 19 Aug 2019 at 15:52, Philip Withnall
<philip tecnocode co uk> wrote:

Hey,

On Mon, 19 Aug 2019, at 15:45, Andrea Veri wrote:
Philip,

replies in line :)

Thanks!

On Fri, 16 Aug 2019 at 12:55, Philip Withnall
<philip tecnocode co uk> wrote:
Particularly, I would like to know:
 • The bandwidth our services used each year, for the last 3 years. I
assume this would be dominated by website requests (www.gnome.org and
GitLab) and tarball downloads.

We share the hosting with other corporate services so we unfortunately
won't be able to provide bandwidth statistics.

OK. Would you be able to estimate a rough upper and lower bound? I have no idea even what order of 
magnitude we're likely using.

Unfortunately not; all the devices in use are shared (with bandwidth
therefore aggregated) with other community tenants, which means
there's no easy way to single out GNOME's usage.

 • The number of dedicated machines we have running (for hosting
services or for running CI) and their average load factors over the
last (for example) week.

The total number of bare-metal machines we have running is:

8 hypervisors
7 CI runners

Thanks.

Load is usually pretty low (6-8 on load15) on the hypervisors. As for
the runners, we're currently not gathering any utilization statistics,
but I expect the load to be considerably higher at peak hours when
multiple concurrent jobs are running.

OK. Is it worth adding utilisation gathering to the runners?

The only purpose I can foresee would be diagnostics in case of
frequent failures or the like, but we already have monitoring in place
to figure out whether a runner has become unhealthy and investigate as
soon as a page is created. Troubleshooting is usually very
straightforward; over time we corrected all the issues that came up
and made sure all the runners contain fixes to prevent them from
happening again. The runners don't run any service other than the
GitLab runners themselves, and the stats we're interested in (number
of builds, failures, etc.) are already available via the GitLab
Prometheus/Grafana metrics platform.
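(For reference, the load15 figure quoted above is the standard Linux 15-minute load average, readable from /proc/loadavg. A minimal sketch of parsing it; the sample values below are made up, not actual hypervisor readings:)

```python
# Parse a Linux /proc/loadavg line into 1-, 5- and 15-minute load averages.
# On a live system you would read the line from open("/proc/loadavg").

def parse_loadavg(line: str) -> tuple[float, float, float]:
    """Return (load1, load5, load15) from a /proc/loadavg-style line."""
    fields = line.split()
    return float(fields[0]), float(fields[1]), float(fields[2])

# Sample line in the standard /proc/loadavg format (made-up values).
sample = "6.42 6.80 7.15 3/1024 31337"
load1, load5, load15 = parse_loadavg(sample)
print(load15)  # the 15-minute average referred to above
```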

 • Whether those servers run on renewable energy or, if not, which
country and region they are each physically hosted in (so I can
calculate carbon intensity of energy supply for them).

The datacenters in question are located in Raleigh, North Carolina;
Phoenix, Arizona; and London.

So none of them explicitly run on renewables?

I'm honestly not aware whether these datacenters run on renewables or not.

 • For the AWS cloud services we use, which regions they are
provisioned in (since that impacts their carbon intensity, see [1]);
and the equivalent (if known) for any other cloud services we use.

What we mainly use within AWS are S3 buckets in the following 2 regions:

us-east-1
us-west-2

How many buckets do we use on average?

Very few: we're currently making use of 3 buckets in total :)

--
Cheers,

Andrea

Red Hatter,
Fedora / EPEL packager,
GNOME Infrastructure Team Coordinator,
Former GNOME Foundation Board of Directors Secretary,
GNOME Foundation Membership & Elections Committee Chairman

Homepage: https://www.gnome.org/~av

