Google protobuf and large binary blobs


I'm building software to remotely control radio hardware attached to a PC.

I plan to use ZeroMQ as the transport, with an RPC-like request-reply pattern and different messages on top of it to represent the operations.

While most of the messages will be control and status information, there should also be an option to set a blob of data to transmit, or to request a blob of received data. These data blobs will typically be in the range of 5-10 MB, but it should be possible to use larger blobs of several 100 MB.

For the message format, I found Google Protocol Buffers appealing because I could define one message type on the transport link that has optional elements for all the commands and responses. However, the protobuf FAQ states that such large messages will negatively impact performance.
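To make the idea concrete, here is a rough sketch of the envelope I have in mind (proto2 syntax; all names here are placeholders, not a final design):

syntax = "proto2";

// Placeholder sub-messages for control and status traffic.
message Command {
    optional string name = 1;
}

message Status {
    optional string text = 1;
}

// One envelope per request/reply; exactly one of the optional
// fields would be set in any given message.
message Envelope {
    optional Command command = 1;  // control operations
    optional Status status = 2;    // status information
    optional bytes blob = 3;       // data blob, 5-10 MB, possibly much larger
}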

So the question is: how bad would it be? What negative effects should I expect? I don't want to base the whole communication scheme on protobuf only to find out that it doesn't work.

I don't have the time to do this for you, but I would browse the protobuf source code. Better yet, go ahead and write your code using a large bytes field, build protobuf from source, and step through it in a debugger to see what happens when you send and receive large blobs.

From experience, I can tell you that large repeated message fields are not efficient unless they have the [packed=true] attribute, but that only works for primitive types.
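For example (a sketch in proto2 syntax; the message names are made up):

// [packed=true] encodes the whole repeated field as a single
// length-delimited run instead of one tag per element, but it is
// only accepted on repeated fields of primitive (scalar numeric) types.
message Samples {
    repeated fixed64 value = 1 [packed=true];   // fine: primitive type
    // repeated BlobInfo info = 2 [packed=true];  // rejected: message type
}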

My gut feeling is that large bytes fields are efficient, but that is totally unsubstantiated.

You could also bypass protobuf for the large blobs:

message BlobInfo {
    required fixed64 size = 1;
    ...
}

message MainFormat {
    ...
    optional BlobInfo blob = 17;
}

Then your parsing code would look like this:

...
if (msg.has_blob()) {
    uint64_t size = msg.blob().size();
    // read the raw blob that follows the protobuf message (pseudo-code;
    // blob_buffer is assumed to be allocated elsewhere)
    zmqsock.recv(blob_buffer, size);
}
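For completeness, the sending side could push the protobuf header and the raw blob as two frames of one ZeroMQ multipart message, so they arrive together. This is only a sketch assuming the cppzmq bindings and a generated header named mainformat.pb.h (both assumptions, not part of your setup):

#include <string>
#include <zmq.hpp>
#include "mainformat.pb.h"  // hypothetical generated header for MainFormat

// Send the small protobuf header first, then the raw blob bytes,
// joined into one multipart message via the "send more" flag.
void send_with_blob(zmq::socket_t& sock, const std::string& blob)
{
    MainFormat msg;
    msg.mutable_blob()->set_size(blob.size());  // header announces blob size

    std::string header;
    msg.SerializeToString(&header);

    sock.send(zmq::buffer(header), zmq::send_flags::sndmore);  // frame 1: header
    sock.send(zmq::buffer(blob), zmq::send_flags::none);       // frame 2: raw blob
}

This way protobuf only ever serializes the small header; the blob itself never passes through protobuf's parser at all.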
