Arber Xhindoli pushed to branch arber/91-get-tree at BuildGrid / buildgrid
Commits:
- 886e3ff4 by Laurence Urhegyi at 2018-11-22T18:18:58Z
- 131c6d87 by Finn at 2018-11-23T09:01:50Z
- 2360d613 by Finn at 2018-11-23T09:01:50Z
- 15e7a095 by Finn at 2018-11-23T09:01:50Z
- b21c1258 by Finn at 2018-11-23T09:01:50Z
- 7e184bf9 by Finn at 2018-11-23T09:01:50Z
- 38ed83ba by Finn at 2018-11-23T09:01:50Z
- ac53c00a by Arber Xhindoli at 2018-11-23T13:46:30Z
- bf7b415f by Arber Xhindoli at 2018-11-23T13:46:30Z
- 448377a6 by Arber Xhindoli at 2018-11-23T13:46:30Z
- 234ccc8b by Arber Xhindoli at 2018-11-23T13:46:30Z
- 86d68236 by Arber Xhindoli at 2018-11-23T19:58:20Z
- 63c90e87 by Arber Xhindoli at 2018-11-23T20:02:39Z
17 changed files:
- + COMMITTERS.md
- CONTRIBUTING.rst
- − MAINTAINERS
- buildgrid/_app/bots/buildbox.py
- buildgrid/_app/bots/host.py
- buildgrid/_app/commands/cmd_bot.py
- buildgrid/_app/commands/cmd_cas.py
- buildgrid/_app/commands/cmd_execute.py
- buildgrid/_app/commands/cmd_operation.py
- buildgrid/_app/commands/cmd_server.py
- buildgrid/client/cas.py
- buildgrid/server/cas/instance.py
- buildgrid/server/cas/service.py
- buildgrid/settings.py
- tests/cas/data/hello/hello.h → tests/cas/data/hello/hello2/hello.h
- + tests/cas/data/hello/hello3/hello4/hello5/hello.h
- tests/cas/test_client.py
Changes:
1 |
+## COMMITTERS
|
|
2 |
+ |
|
3 |
+| Name | Email |
|
|
4 |
+| -------- | -------- |
|
|
5 |
+| Carter Sande | <carter.sande@duodecima.technology> |
|
|
6 |
+| Ed Baunton | <edbaunton@gmail.com> |
|
|
7 |
+| Laurence Urhegyi | <laurence.urhegyi@codethink.co.uk> |
|
|
8 |
+| Finn Ball | <finn.ball@codethink.co.uk> |
|
|
9 |
+| Paul Sherwood | <paul.sherwood@codethink.co.uk> |
|
|
10 |
+| James Ennis | <james.ennis@codethink.com> |
|
|
11 |
+| Jim MacArthur | <jim.macarthur@codethink.co.uk> |
|
|
12 |
+| Juerg Billeter | <juerg.billeter@codethink.co.uk> |
|
|
13 |
+| Martin Blanchard | <martin.blanchard@codethink.co.uk> |
|
|
14 |
+| Marios Hadjimichael | <mhadjimichae@bloomberg.net> |
|
|
15 |
+| Raoul Hidalgo Charman | <raoul.hidalgocharman@codethink.co.uk> |
|
|
16 |
+| Rohit Kothur | <rkothur@bloomberg.net> |
|
... | ... | @@ -32,40 +32,31 @@ side effects and quirks the feature may have introduced. More on this below in |
32 | 32 |
|
33 | 33 |
.. _BuildGrid mailing list: https://lists.buildgrid.build/cgi-bin/mailman/listinfo/buildgrid
|
34 | 34 |
|
35 |
- |
|
36 | 35 |
.. _patch-submissions:
|
37 | 36 |
|
38 | 37 |
Patch submissions
|
39 | 38 |
-----------------
|
40 | 39 |
|
41 |
-We are running `trunk based development`_. The idea behind this is that merge
|
|
42 |
-requests to the trunk will be small and made often, thus making the review and
|
|
43 |
-merge process as fast as possible. We do not want to end up with a huge backlog
|
|
44 |
-of outstanding merge requests. If possible, it is preferred that merge requests
|
|
45 |
-address specific points and clearly outline what problem they are solving.
|
|
46 |
- |
|
47 |
-Branches must be submitted as merge requests (MR) on GitLab and should be
|
|
48 |
-associated with an issue, whenever possible. If it's a small change, we'll
|
|
49 |
-accept an MR without it being associated to an issue, but generally we prefer an
|
|
50 |
-issue to be raised in advance. This is so that we can track the work that is
|
|
40 |
+Branches must be submitted as merge requests (MR) on GitLab and should have a
|
|
41 |
+corresponding issue raised in advance (whenever possible). If it's a small change,
|
|
42 |
+an MR without it being associated to an issue is okay, but generally we prefer an
|
|
43 |
+issue to be raised in advance so that we can track the work that is
|
|
51 | 44 |
currently in progress on the project.
|
52 | 45 |
|
46 |
+When submitting a merge request, please obtain a review from another committer
|
|
47 |
+who is familiar with the area of the code base which the branch affects. An
|
|
48 |
+approval from another committer who is not the patch author will be needed
|
|
49 |
+before any merge (we use GitLab's 'approval' feature for this).
|
|
50 |
+ |
|
53 | 51 |
Below is a list of good patch submission practices:
|
54 | 52 |
|
55 | 53 |
- Each commit should address a specific issue number in the commit message. This
|
56 | 54 |
is really important for provenance reasons.
|
57 |
-- Merge requests that are not yet ready for review must be prefixed with the
|
|
58 |
- ``WIP:`` identifier, but if we stick to trunk based development then the
|
|
59 |
- ``WIP:`` identifier will not stay around for very long on a merge request.
|
|
60 |
-- When a merge request is ready for review, please find someone willing to do
|
|
61 |
- the review (ideally a maintainer) and assign them the MR, leaving a comment
|
|
62 |
- asking for their review.
|
|
55 |
+- Merge requests that are not yet ready for review should be prefixed with the
|
|
56 |
+ ``WIP:`` identifier.
|
|
63 | 57 |
- Submitted branches should not contain a history of work done.
|
64 | 58 |
- Unit tests should be a separate commit.
|
65 | 59 |
|
66 |
-.. _trunk based development: https://trunkbaseddevelopment.com
|
|
67 |
- |
|
68 |
- |
|
69 | 60 |
Commit messages
|
70 | 61 |
~~~~~~~~~~~~~~~
|
71 | 62 |
|
... | ... | @@ -89,6 +80,57 @@ For more tips, please read `The seven rules of a great Git commit message`_. |
89 | 80 |
|
90 | 81 |
.. _The seven rules of a great Git commit message: https://chris.beams.io/posts/git-commit/#seven-rules
|
91 | 82 |
|
83 |
+.. _committer-access:
|
|
84 |
+ |
|
85 |
+Committer access
|
|
86 |
+----------------
|
|
87 |
+ |
|
88 |
+Committers in the BuildGrid project are those folks to whom the right to
|
|
89 |
+directly commit changes to our version controlled resources has been granted.
|
|
90 |
+While every contribution is
|
|
91 |
+valued regardless of its source, not every person who contributes code to the
|
|
92 |
+project will earn commit access. The `COMMITTERS`_ file lists all committers.
|
|
93 |
+ |
|
94 |
+.. _COMMITTERS: https://gitlab.com/BuildGrid/buildgrid/blob/master/COMMITTERS.md
|
|
95 |
+.. _Subversion: http://subversion.apache.org/docs/community-guide/roles.html#committers
|
|
96 |
+ |
|
97 |
+ |
|
98 |
+How commit access is granted
|
|
99 |
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
100 |
+ |
|
101 |
+After someone has successfully contributed a few non-trivial patches, some full
|
|
102 |
+committer, usually whoever has reviewed and applied the most patches from that
|
|
103 |
+contributor, proposes them for commit access. This proposal is sent only to the
|
|
104 |
+other full committers -- the ensuing discussion is private, so that everyone can
|
|
105 |
+feel comfortable speaking their minds. Assuming there are no objections, the
|
|
106 |
+contributor is granted commit access. The decision is made by consensus; there
|
|
107 |
+are no formal rules governing the procedure, though generally if someone strongly
|
|
108 |
+objects the access is not offered, or is offered on a provisional basis.
|
|
109 |
+ |
|
110 |
+This of course relies on contributors being responsive and showing willingness
|
|
111 |
+to address any problems that may arise after landing patches. However, the primary
|
|
112 |
+criterion for commit access is good judgment.
|
|
113 |
+ |
|
114 |
+You do not have to be a technical wizard, or demonstrate deep knowledge of the
|
|
115 |
+entire codebase to become a committer. You just need to know what you don't
|
|
116 |
+know. If your patches adhere to the guidelines in this file, adhere to all the usual
|
|
117 |
+unquantifiable rules of coding (code should be readable, robust, maintainable, etc.),
|
|
118 |
+and respect the Hippocratic Principle of "first, do no harm", then you will probably
|
|
119 |
+get commit access pretty quickly. The size, complexity, and quantity of your patches
|
|
120 |
+do not matter as much as the degree of care you show in avoiding bugs and minimizing
|
|
121 |
+unnecessary impact on the rest of the code. Many full committers are people who have
|
|
122 |
+not made major code contributions, but rather lots of small, clean fixes, each of
|
|
123 |
+which was an unambiguous improvement to the code. (Of course, this does not mean the
|
|
124 |
+project needs a bunch of very trivial patches whose only purpose is to gain commit
|
|
125 |
+access; knowing what's worth a patch post and what's not is part of showing good
|
|
126 |
+judgment.)
|
|
127 |
+ |
|
128 |
+When submitting a merge request, please obtain a review from another committer
|
|
129 |
+who is familiar with the area of the code base which the branch affects. Asking on
|
|
130 |
+Slack is probably the best way to go about this. An approval from a committer
|
|
131 |
+who is not the patch author will be needed before any merge (we use GitLab's
|
|
132 |
+'approval' feature for this).
|
|
133 |
+ |
|
92 | 134 |
|
93 | 135 |
.. _coding-style:
|
94 | 136 |
|
... | ... | @@ -198,35 +240,6 @@ trunk. |
198 | 240 |
|
199 | 241 |
.. _coverage report: https://buildgrid.gitlab.io/buildgrid/coverage/
|
200 | 242 |
|
201 |
- |
|
202 |
-.. _committer-access:
|
|
203 |
- |
|
204 |
-Committer access
|
|
205 |
-----------------
|
|
206 |
- |
|
207 |
-We'll hand out commit access to anyone who has successfully landed a single
|
|
208 |
-patch to the code base. Please request this via Slack or the mailing list.
|
|
209 |
- |
|
210 |
-This of course relies on contributors being responsive and showing willingness
|
|
211 |
-to address any problems that may arise after landing branches.
|
|
212 |
- |
|
213 |
-When submitting a merge request, please obtain a review from another committer
|
|
214 |
-who is familiar with the area of the code base which the branch effects. An
|
|
215 |
-approval from another committer who is not the patch author will be needed
|
|
216 |
-before any merge (we use gitlab's 'approval' feature for this).
|
|
217 |
- |
|
218 |
-What we are expecting of committers here in general is basically to escalate the
|
|
219 |
-review in cases of uncertainty.
|
|
220 |
- |
|
221 |
-.. note::
|
|
222 |
- |
|
223 |
- We don't have any detailed policy for "bad actors", but will of course handle
|
|
224 |
- things on a case by case basis - commit access should not result in commit
|
|
225 |
- wars or be used as a tool to subvert the project when disagreements arise.
|
|
226 |
- Such incidents (if any) would surely lead to temporary suspension of commit
|
|
227 |
- rights.
|
|
228 |
- |
|
229 |
- |
|
230 | 243 |
.. _gitlab-features:
|
231 | 244 |
|
232 | 245 |
GitLab features
|
1 |
-Finn Ball
|
|
2 |
-E-mail: finn ball codethink co uk
|
|
3 |
-Userid: finnball
|
... | ... | @@ -13,6 +13,7 @@ |
13 | 13 |
# limitations under the License.
|
14 | 14 |
|
15 | 15 |
|
16 |
+import logging
|
|
16 | 17 |
import os
|
17 | 18 |
import subprocess
|
18 | 19 |
import tempfile
|
... | ... | @@ -29,7 +30,8 @@ def work_buildbox(context, lease): |
29 | 30 |
"""
|
30 | 31 |
local_cas_directory = context.local_cas
|
31 | 32 |
# instance_name = context.parent
|
32 |
- logger = context.logger
|
|
33 |
+ |
|
34 |
+ logger = logging.getLogger(__name__)
|
|
33 | 35 |
|
34 | 36 |
action_digest = remote_execution_pb2.Digest()
|
35 | 37 |
|
... | ... | @@ -13,6 +13,7 @@ |
13 | 13 |
# limitations under the License.
|
14 | 14 |
|
15 | 15 |
|
16 |
+import logging
|
|
16 | 17 |
import os
|
17 | 18 |
import subprocess
|
18 | 19 |
import tempfile
|
... | ... | @@ -26,7 +27,7 @@ def work_host_tools(context, lease): |
26 | 27 |
"""Executes a lease for a build action, using host tools.
|
27 | 28 |
"""
|
28 | 29 |
instance_name = context.parent
|
29 |
- logger = context.logger
|
|
30 |
+ logger = logging.getLogger(__name__)
|
|
30 | 31 |
|
31 | 32 |
action_digest = remote_execution_pb2.Digest()
|
32 | 33 |
action_result = remote_execution_pb2.ActionResult()
|
... | ... | @@ -20,7 +20,6 @@ Bot command |
20 | 20 |
Create a bot interface and request work
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
from pathlib import Path, PurePath
|
25 | 24 |
import sys
|
26 | 25 |
from urllib.parse import urlparse
|
... | ... | @@ -120,8 +119,7 @@ def cli(context, parent, update_period, remote, client_key, client_cert, server_ |
120 | 119 |
context.cas_client_cert = context.client_cert
|
121 | 120 |
context.cas_server_cert = context.server_cert
|
122 | 121 |
|
123 |
- context.logger = logging.getLogger(__name__)
|
|
124 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
122 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
125 | 123 |
|
126 | 124 |
interface = bot_interface.BotInterface(context.channel)
|
127 | 125 |
|
... | ... | @@ -20,7 +20,6 @@ Execute command |
20 | 20 |
Request work to be executed and monitor status of jobs.
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
import os
|
25 | 24 |
import sys
|
26 | 25 |
from urllib.parse import urlparse
|
... | ... | @@ -63,8 +62,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
63 | 62 |
|
64 | 63 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
65 | 64 |
|
66 |
- context.logger = logging.getLogger(__name__)
|
|
67 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
65 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
68 | 66 |
|
69 | 67 |
|
70 | 68 |
@cli.command('upload-dummy', short_help="Upload a dummy action. Should be used with `execute dummy-request`")
|
... | ... | @@ -75,7 +73,7 @@ def upload_dummy(context): |
75 | 73 |
action_digest = uploader.put_message(action)
|
76 | 74 |
|
77 | 75 |
if action_digest.ByteSize():
|
78 |
- click.echo('Success: Pushed digest "{}/{}"'
|
|
76 |
+ click.echo('Success: Pushed digest=[{}/{}]'
|
|
79 | 77 |
.format(action_digest.hash, action_digest.size_bytes))
|
80 | 78 |
else:
|
81 | 79 |
click.echo("Error: Failed pushing empty message.", err=True)
|
... | ... | @@ -92,7 +90,7 @@ def upload_file(context, file_path, verify): |
92 | 90 |
for path in file_path:
|
93 | 91 |
if not os.path.isabs(path):
|
94 | 92 |
path = os.path.abspath(path)
|
95 |
- context.logger.debug("Queueing {}".format(path))
|
|
93 |
+ click.echo("Queueing path=[{}]".format(path))
|
|
96 | 94 |
|
97 | 95 |
file_digest = uploader.upload_file(path, queue=True)
|
98 | 96 |
|
... | ... | @@ -102,12 +100,12 @@ def upload_file(context, file_path, verify): |
102 | 100 |
for file_digest in sent_digests:
|
103 | 101 |
file_path = os.path.relpath(files_map[file_digest.hash])
|
104 | 102 |
if verify and file_digest.size_bytes != os.stat(file_path).st_size:
|
105 |
- click.echo('Error: Failed to verify "{}"'.format(file_path), err=True)
|
|
103 |
+ click.echo("Error: Failed to verify '{}'".format(file_path), err=True)
|
|
106 | 104 |
elif file_digest.ByteSize():
|
107 |
- click.echo('Success: Pushed "{}" with digest "{}/{}"'
|
|
105 |
+ click.echo("Success: Pushed path=[{}] with digest=[{}/{}]"
|
|
108 | 106 |
.format(file_path, file_digest.hash, file_digest.size_bytes))
|
109 | 107 |
else:
|
110 |
- click.echo('Error: Failed pushing "{}"'.format(file_path), err=True)
|
|
108 |
+ click.echo("Error: Failed pushing path=[{}]".format(file_path), err=True)
|
|
111 | 109 |
|
112 | 110 |
|
113 | 111 |
@cli.command('upload-dir', short_help="Upload a directory to the CAS server.")
|
... | ... | @@ -121,7 +119,7 @@ def upload_directory(context, directory_path, verify): |
121 | 119 |
for node, blob, path in merkle_tree_maker(directory_path):
|
122 | 120 |
if not os.path.isabs(path):
|
123 | 121 |
path = os.path.abspath(path)
|
124 |
- context.logger.debug("Queueing {}".format(path))
|
|
122 |
+ click.echo("Queueing path=[{}]".format(path))
|
|
125 | 123 |
|
126 | 124 |
node_digest = uploader.put_blob(blob, digest=node.digest, queue=True)
|
127 | 125 |
|
... | ... | @@ -134,12 +132,12 @@ def upload_directory(context, directory_path, verify): |
134 | 132 |
node_path = os.path.relpath(node_path)
|
135 | 133 |
if verify and (os.path.isfile(node_path) and
|
136 | 134 |
node_digest.size_bytes != os.stat(node_path).st_size):
|
137 |
- click.echo('Error: Failed to verify "{}"'.format(node_path), err=True)
|
|
135 |
+ click.echo("Error: Failed to verify path=[{}]".format(node_path), err=True)
|
|
138 | 136 |
elif node_digest.ByteSize():
|
139 |
- click.echo('Success: Pushed "{}" with digest "{}/{}"'
|
|
137 |
+ click.echo("Success: Pushed path=[{}] with digest=[{}/{}]"
|
|
140 | 138 |
.format(node_path, node_digest.hash, node_digest.size_bytes))
|
141 | 139 |
else:
|
142 |
- click.echo('Error: Failed pushing "{}"'.format(node_path), err=True)
|
|
140 |
+ click.echo("Error: Failed pushing path=[{}]".format(node_path), err=True)
|
|
143 | 141 |
|
144 | 142 |
|
145 | 143 |
def _create_digest(digest_string):
|
... | ... | @@ -160,8 +158,8 @@ def _create_digest(digest_string): |
160 | 158 |
@pass_context
|
161 | 159 |
def download_file(context, digest_string, file_path, verify):
|
162 | 160 |
if os.path.exists(file_path):
|
163 |
- click.echo('Error: Invalid value for "file-path": ' +
|
|
164 |
- 'Path "{}" already exists.'.format(file_path), err=True)
|
|
161 |
+ click.echo("Error: Invalid value for " +
|
|
162 |
+ "path=[{}] already exists.".format(file_path), err=True)
|
|
165 | 163 |
return
|
166 | 164 |
|
167 | 165 |
digest = _create_digest(digest_string)
|
... | ... | @@ -171,11 +169,11 @@ def download_file(context, digest_string, file_path, verify): |
171 | 169 |
if verify:
|
172 | 170 |
file_digest = create_digest(read_file(file_path))
|
173 | 171 |
if file_digest != digest:
|
174 |
- click.echo('Error: Failed to verify "{}"'.format(file_path), err=True)
|
|
172 |
+ click.echo("Error: Failed to verify path=[{}]".format(file_path), err=True)
|
|
175 | 173 |
return
|
176 | 174 |
|
177 | 175 |
if os.path.isfile(file_path):
|
178 |
- click.echo('Success: Pulled "{}" from digest "{}/{}"'
|
|
176 |
+ click.echo("Success: Pulled path=[{}] from digest=[{}/{}]"
|
|
179 | 177 |
.format(file_path, digest.hash, digest.size_bytes))
|
180 | 178 |
else:
|
181 | 179 |
click.echo('Error: Failed pulling "{}"'.format(file_path), err=True)
|
... | ... | @@ -190,8 +188,8 @@ def download_file(context, digest_string, file_path, verify): |
190 | 188 |
def download_directory(context, digest_string, directory_path, verify):
|
191 | 189 |
if os.path.exists(directory_path):
|
192 | 190 |
if not os.path.isdir(directory_path) or os.listdir(directory_path):
|
193 |
- click.echo('Error: Invalid value for "directory-path": ' +
|
|
194 |
- 'Path "{}" already exists.'.format(directory_path), err=True)
|
|
191 |
+ click.echo("Error: Invalid value, " +
|
|
192 |
+ "path=[{}] already exists.".format(directory_path), err=True)
|
|
195 | 193 |
return
|
196 | 194 |
|
197 | 195 |
digest = _create_digest(digest_string)
|
... | ... | @@ -204,11 +202,11 @@ def download_directory(context, digest_string, directory_path, verify): |
204 | 202 |
if node.DESCRIPTOR is remote_execution_pb2.DirectoryNode.DESCRIPTOR:
|
205 | 203 |
last_directory_node = node
|
206 | 204 |
if last_directory_node.digest != digest:
|
207 |
- click.echo('Error: Failed to verify "{}"'.format(directory_path), err=True)
|
|
205 |
+ click.echo("Error: Failed to verify path=[{}]".format(directory_path), err=True)
|
|
208 | 206 |
return
|
209 | 207 |
|
210 | 208 |
if os.path.isdir(directory_path):
|
211 |
- click.echo('Success: Pulled "{}" from digest "{}/{}"'
|
|
209 |
+ click.echo("Success: Pulled path=[{}] from digest=[{}/{}]"
|
|
212 | 210 |
.format(directory_path, digest.hash, digest.size_bytes))
|
213 | 211 |
else:
|
214 |
- click.echo('Error: Failed pulling "{}"'.format(directory_path), err=True)
|
|
212 |
+ click.echo("Error: Failed pulling path=[{}]".format(directory_path), err=True)
|
... | ... | @@ -20,7 +20,6 @@ Execute command |
20 | 20 |
Request work to be executed and monitor status of jobs.
|
21 | 21 |
"""
|
22 | 22 |
|
23 |
-import logging
|
|
24 | 23 |
import os
|
25 | 24 |
import stat
|
26 | 25 |
import sys
|
... | ... | @@ -64,8 +63,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
64 | 63 |
|
65 | 64 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
66 | 65 |
|
67 |
- context.logger = logging.getLogger(__name__)
|
|
68 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
66 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
69 | 67 |
|
70 | 68 |
|
71 | 69 |
@cli.command('request-dummy', short_help="Send a dummy action.")
|
... | ... | @@ -76,7 +74,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
76 | 74 |
@pass_context
|
77 | 75 |
def request_dummy(context, number, wait_for_completion):
|
78 | 76 |
|
79 |
- context.logger.info("Sending execution request...")
|
|
77 |
+ click.echo("Sending execution request...")
|
|
80 | 78 |
action = remote_execution_pb2.Action(do_not_cache=True)
|
81 | 79 |
action_digest = create_digest(action.SerializeToString())
|
82 | 80 |
|
... | ... | @@ -96,7 +94,7 @@ def request_dummy(context, number, wait_for_completion): |
96 | 94 |
result = None
|
97 | 95 |
for stream in response:
|
98 | 96 |
result = stream
|
99 |
- context.logger.info(result)
|
|
97 |
+ click.echo(result)
|
|
100 | 98 |
|
101 | 99 |
if not result.done:
|
102 | 100 |
click.echo("Result did not return True." +
|
... | ... | @@ -104,7 +102,7 @@ def request_dummy(context, number, wait_for_completion): |
104 | 102 |
sys.exit(-1)
|
105 | 103 |
|
106 | 104 |
else:
|
107 |
- context.logger.info(next(response))
|
|
105 |
+ click.echo(next(response))
|
|
108 | 106 |
|
109 | 107 |
|
110 | 108 |
@cli.command('command', short_help="Send a command to be executed.")
|
... | ... | @@ -132,12 +130,12 @@ def run_command(context, input_root, commands, output_file, output_directory): |
132 | 130 |
|
133 | 131 |
command_digest = uploader.put_message(command, queue=True)
|
134 | 132 |
|
135 |
- context.logger.info('Sent command: {}'.format(command_digest))
|
|
133 |
+ click.echo("Sent command=[{}]".format(command_digest))
|
|
136 | 134 |
|
137 | 135 |
# TODO: Check for missing blobs
|
138 | 136 |
input_root_digest = uploader.upload_directory(input_root)
|
139 | 137 |
|
140 |
- context.logger.info('Sent input: {}'.format(input_root_digest))
|
|
138 |
+ click.echo("Sent input=[{}]".format(input_root_digest))
|
|
141 | 139 |
|
142 | 140 |
action = remote_execution_pb2.Action(command_digest=command_digest,
|
143 | 141 |
input_root_digest=input_root_digest,
|
... | ... | @@ -145,7 +143,7 @@ def run_command(context, input_root, commands, output_file, output_directory): |
145 | 143 |
|
146 | 144 |
action_digest = uploader.put_message(action, queue=True)
|
147 | 145 |
|
148 |
- context.logger.info("Sent action: {}".format(action_digest))
|
|
146 |
+ click.echo("Sent action=[{}]".format(action_digest))
|
|
149 | 147 |
|
150 | 148 |
request = remote_execution_pb2.ExecuteRequest(instance_name=context.instance_name,
|
151 | 149 |
action_digest=action_digest,
|
... | ... | @@ -154,7 +152,7 @@ def run_command(context, input_root, commands, output_file, output_directory): |
154 | 152 |
|
155 | 153 |
stream = None
|
156 | 154 |
for stream in response:
|
157 |
- context.logger.info(stream)
|
|
155 |
+ click.echo(stream)
|
|
158 | 156 |
|
159 | 157 |
execute_response = remote_execution_pb2.ExecuteResponse()
|
160 | 158 |
stream.response.Unpack(execute_response)
|
... | ... | @@ -21,7 +21,6 @@ Check the status of operations |
21 | 21 |
"""
|
22 | 22 |
|
23 | 23 |
from collections import OrderedDict
|
24 |
-import logging
|
|
25 | 24 |
from operator import attrgetter
|
26 | 25 |
from urllib.parse import urlparse
|
27 | 26 |
import sys
|
... | ... | @@ -67,8 +66,7 @@ def cli(context, remote, instance_name, client_key, client_cert, server_cert): |
67 | 66 |
|
68 | 67 |
context.channel = grpc.secure_channel(context.remote, credentials)
|
69 | 68 |
|
70 |
- context.logger = logging.getLogger(__name__)
|
|
71 |
- context.logger.debug("Starting for remote {}".format(context.remote))
|
|
69 |
+ click.echo("Starting for remote=[{}]".format(context.remote))
|
|
72 | 70 |
|
73 | 71 |
|
74 | 72 |
def _print_operation_status(operation, print_details=False):
|
... | ... | @@ -21,7 +21,6 @@ Create a BuildGrid server. |
21 | 21 |
"""
|
22 | 22 |
|
23 | 23 |
import asyncio
|
24 |
-import logging
|
|
25 | 24 |
import sys
|
26 | 25 |
|
27 | 26 |
import click
|
... | ... | @@ -35,7 +34,7 @@ from ..settings import parser |
35 | 34 |
@click.group(name='server', short_help="Start a local server instance.")
|
36 | 35 |
@pass_context
|
37 | 36 |
def cli(context):
|
38 |
- context.logger = logging.getLogger(__name__)
|
|
37 |
+ pass
|
|
39 | 38 |
|
40 | 39 |
|
41 | 40 |
@cli.command('start', short_help="Setup a new server instance.")
|
... | ... | @@ -61,7 +60,7 @@ def start(context, config): |
61 | 60 |
pass
|
62 | 61 |
|
63 | 62 |
finally:
|
64 |
- context.logger.info("Stopping server")
|
|
63 |
+ click.echo("Stopping server")
|
|
65 | 64 |
server.stop()
|
66 | 65 |
loop.close()
|
67 | 66 |
|
... | ... | @@ -23,19 +23,13 @@ from buildgrid._exceptions import NotFoundError |
23 | 23 |
from buildgrid._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
|
24 | 24 |
from buildgrid._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
|
25 | 25 |
from buildgrid._protos.google.rpc import code_pb2
|
26 |
-from buildgrid.settings import HASH
|
|
26 |
+from buildgrid.settings import HASH, MAX_REQUEST_SIZE, MAX_REQUEST_COUNT
|
|
27 | 27 |
from buildgrid.utils import merkle_tree_maker
|
28 | 28 |
|
29 | 29 |
|
30 | 30 |
# Maximum size for a queueable file:
|
31 | 31 |
FILE_SIZE_THRESHOLD = 1 * 1024 * 1024
|
32 | 32 |
|
33 |
-# Maximum size for a single gRPC request:
|
|
34 |
-MAX_REQUEST_SIZE = 2 * 1024 * 1024
|
|
35 |
- |
|
36 |
-# Maximum number of elements per gRPC request:
|
|
37 |
-MAX_REQUEST_COUNT = 500
|
|
38 |
- |
|
39 | 33 |
|
40 | 34 |
class _CallCache:
|
41 | 35 |
"""Per remote grpc.StatusCode.UNIMPLEMENTED call cache."""
|
... | ... | @@ -390,11 +384,10 @@ class Downloader: |
390 | 384 |
assert digest.hash in directories
|
391 | 385 |
|
392 | 386 |
directory = directories[digest.hash]
|
393 |
- self._write_directory(digest.hash, directory_path,
|
|
387 |
+ self._write_directory(directory, directory_path,
|
|
394 | 388 |
directories=directories, root_barrier=directory_path)
|
395 | 389 |
|
396 | 390 |
directory_fetched = True
|
397 |
- |
|
398 | 391 |
except grpc.RpcError as e:
|
399 | 392 |
status_code = e.code()
|
400 | 393 |
if status_code == grpc.StatusCode.UNIMPLEMENTED:
|
... | ... | @@ -24,7 +24,7 @@ import logging |
24 | 24 |
from buildgrid._exceptions import InvalidArgumentError, NotFoundError, OutOfRangeError
|
25 | 25 |
from buildgrid._protos.google.bytestream import bytestream_pb2
|
26 | 26 |
from buildgrid._protos.build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2
|
27 |
-from buildgrid.settings import HASH
|
|
27 |
+from buildgrid.settings import HASH, MAX_REQUEST_SIZE
|
|
28 | 28 |
|
29 | 29 |
|
30 | 30 |
class ContentAddressableStorageInstance:
|
... | ... | @@ -58,6 +58,34 @@ class ContentAddressableStorageInstance: |
58 | 58 |
|
59 | 59 |
return response
|
60 | 60 |
|
61 |
+ def get_tree(self, request):
|
|
62 |
+ storage = self._storage
|
|
63 |
+ response = re_pb2.GetTreeResponse()
|
|
64 |
+ |
|
65 |
+ root_digest = request.root_digest
|
|
66 |
+ page_size = request.page_size
|
|
67 |
+ |
|
68 |
+ def __get_tree(node_digest):
|
|
69 |
+ nonlocal response, page_size, request
|
|
70 |
+ |
|
71 |
+ if not page_size:
|
|
72 |
+ page_size = request.page_size
|
|
73 |
+ yield response
|
|
74 |
+ |
|
75 |
+ if response.ByteSize() >= (MAX_REQUEST_SIZE):
|
|
76 |
+ yield response
|
|
77 |
+ |
|
78 |
+ directory_from_digest = storage.get_message(node_digest, re_pb2.Directory)
|
|
79 |
+ page_size -= 1
|
|
80 |
+ response.directories.extend([directory_from_digest])
|
|
81 |
+ |
|
82 |
+ for directory in directory_from_digest.directories:
|
|
83 |
+ yield from __get_tree(directory.digest)
|
|
84 |
+ |
|
85 |
+ yield response
|
|
86 |
+ |
|
87 |
+ return __get_tree(root_digest)
|
|
88 |
+ |
|
61 | 89 |
|
62 | 90 |
class ByteStreamInstance:
|
63 | 91 |
|
... | ... | @@ -87,10 +87,16 @@ class ContentAddressableStorageService(remote_execution_pb2_grpc.ContentAddressa |
87 | 87 |
def GetTree(self, request, context):
|
88 | 88 |
self.__logger.debug("GetTree request from [%s]", context.peer())
|
89 | 89 |
|
90 |
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
|
|
91 |
- context.set_details('Method not implemented!')
|
|
90 |
+ try:
|
|
91 |
+ instance = self._get_instance(request.instance_name)
|
|
92 |
+ yield from instance.get_tree(request)
|
|
93 |
+ |
|
94 |
+ except InvalidArgumentError as e:
|
|
95 |
+ self.__logger.error(e)
|
|
96 |
+ context.set_details(str(e))
|
|
97 |
+ context.set_code(grpc.StatusCode.INVALID_ARGUMENT)
|
|
92 | 98 |
|
93 |
- return iter([remote_execution_pb2.GetTreeResponse()])
|
|
99 |
+ yield remote_execution_pb2.GetTreeResponse()
|
|
94 | 100 |
|
95 | 101 |
def _get_instance(self, instance_name):
|
96 | 102 |
try:
|
... | ... | @@ -4,3 +4,9 @@ import hashlib |
4 | 4 |
# The hash function that CAS uses
|
5 | 5 |
HASH = hashlib.sha256
|
6 | 6 |
HASH_LENGTH = HASH().digest_size * 2
|
7 |
+ |
|
8 |
+# Maximum size for a single gRPC request:
|
|
9 |
+MAX_REQUEST_SIZE = 2 * 1024 * 1024
|
|
10 |
+ |
|
11 |
+# Maximum number of elements per gRPC request:
|
|
12 |
+MAX_REQUEST_COUNT = 500
|
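The two limits promoted into `settings.py` above typically bound batched gRPC requests. A hedged sketch of how such limits could cap a batch; the `batch` helper below is illustrative only, not a BuildGrid API:

```python
# Split an iterable of blobs into batches, closing a batch before either
# the byte budget or the element count would be exceeded.
MAX_REQUEST_SIZE = 2 * 1024 * 1024   # maximum size for a single gRPC request
MAX_REQUEST_COUNT = 500              # maximum number of elements per request


def batch(blobs):
    current, size = [], 0
    for blob in blobs:
        if current and (size + len(blob) > MAX_REQUEST_SIZE
                        or len(current) >= MAX_REQUEST_COUNT):
            yield current
            current, size = [], 0
        current.append(blob)
        size += len(blob)
    if current:
        yield current                # flush the final partial batch


# Five 1 MiB blobs against a 2 MiB budget → batches of 2, 2, 1.
batches = list(batch([b"x" * (1024 * 1024)] * 5))
```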
1 |
+#define HELLO_WORLD "Hello, World!"
|
... | ... | @@ -39,16 +39,31 @@ MESSAGES = [ |
39 | 39 |
]
|
40 | 40 |
DATA_DIR = os.path.join(
|
41 | 41 |
os.path.dirname(os.path.realpath(__file__)), 'data')
|
42 |
+ |
|
43 |
+HELLO_DIR = os.path.join(DATA_DIR, 'hello')
|
|
44 |
+HELLO2_DIR = os.path.join(HELLO_DIR, 'hello2')
|
|
45 |
+HELLO3_DIR = os.path.join(HELLO_DIR, 'hello3')
|
|
46 |
+HELLO4_DIR = os.path.join(HELLO3_DIR, 'hello4')
|
|
47 |
+HELLO5_DIR = os.path.join(HELLO4_DIR, 'hello5')
|
|
48 |
+ |
|
42 | 49 |
FILES = [
|
43 | 50 |
(os.path.join(DATA_DIR, 'void'),),
|
44 | 51 |
(os.path.join(DATA_DIR, 'hello.cc'),),
|
45 | 52 |
(os.path.join(DATA_DIR, 'hello', 'hello.c'),
|
46 |
- os.path.join(DATA_DIR, 'hello', 'hello.h'))]
|
|
53 |
+ os.path.join(DATA_DIR, 'hello', 'hello.sh')),
|
|
54 |
+ (os.path.join(HELLO2_DIR, 'hello.h'),),
|
|
55 |
+ (os.path.join(HELLO5_DIR, 'hello.h'),), ]
|
|
56 |
+ |
|
47 | 57 |
FOLDERS = [
|
48 |
- (os.path.join(DATA_DIR, 'hello'),)]
|
|
58 |
+ (HELLO_DIR, HELLO2_DIR, HELLO3_DIR, HELLO4_DIR, HELLO5_DIR)]
|
|
59 |
+ |
|
49 | 60 |
DIRECTORIES = [
|
50 |
- (os.path.join(DATA_DIR, 'hello'),),
|
|
51 |
- (os.path.join(DATA_DIR, 'hello'), DATA_DIR)]
|
|
61 |
+ (HELLO_DIR,),
|
|
62 |
+ (DATA_DIR,),
|
|
63 |
+ (HELLO2_DIR,),
|
|
64 |
+ (HELLO3_DIR,),
|
|
65 |
+ (HELLO4_DIR,),
|
|
66 |
+ (HELLO5_DIR,), ]
|
|
52 | 67 |
|
53 | 68 |
|
54 | 69 |
@pytest.mark.parametrize('blobs', BLOBS)
|