
southerntofu wrote

TLDR: you can't.

Proving that your own machine is doing computations right is already a complex problem:

  • "trusting trust" refers to a famous paper (Ken Thompson's "Reflections on Trusting Trust", from decades ago) introducing the idea that a compromised compiler can subtly alter the programs it compiles and reproduce itself like a virus; this gave birth to the fields of reproducible builds and bootstrappable builds
  • "formal verification" is an established area of computing where the operations performed by a program (e.g. in the Ada language) are verified by mathematical proofs, so there shouldn't be logic errors, though this is mostly used in specialized fields like aeronautics
  • there's also a good deal of research over the years on program "correctness" and preventing "undefined behavior"; most formal verification environments will deal with this to some extent as well
  • there are unfortunately outstanding attacks against the hardware itself, where supposedly-harmless code triggers paths in the CPU that lead to bits being flipped and whatnot (see Spectre & others, and why systems like OpenBSD disabled hyperthreading entirely); those are the subtle attacks, but "evil maid" and "supply chain" attacks are considerably easier for determined actors to pull off
  • the firmware (the many "operating systems" running on all the tiny pieces of hardware like wifi/bluetooth cards, the GSM modem, USB keys) is also a huge attack surface; even firmware in hardware dedicated to security (Intel SGX) has been breached, and even used (in the lab) to protect malware from the operating system's defenses
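The core check behind the reproducible builds mentioned above can be sketched very simply: if the same source built on two independent toolchains yields byte-identical artifacts, a single compromised compiler can no longer hide its tampering. A minimal illustration (the "build" bytes here are just placeholder strings, not real binaries):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact's raw bytes."""
    return hashlib.sha256(artifact).hexdigest()

# A reproducible build produces byte-identical output on independent toolchains.
build_a = b"\x7fELF...binary built on machine A"
build_b = b"\x7fELF...binary built on machine A"   # identical bytes: builds match
build_c = b"\x7fELF...binary built on machine B!"  # differs: tampering OR nondeterminism

print(digest(build_a) == digest(build_b))  # True
print(digest(build_a) == digest(build_c))  # False
```

A mismatch doesn't tell you *which* toolchain is compromised, only that the builds diverge; that's why the technique relies on many independent parties rebuilding and comparing.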

Proving someone else is doing it correctly for you is entirely impossible. So that's why you can never trust someone who says:

Every connected service you use will either trust an explicit set of actors (for example, the browser CA consortium for HTTPS) or destroy the entire planet trying to replace human trust with raw computational power ("the majority of world-wide computing power must be right", or the dictatorship of the majority à la Bitcoin). Making trust models explicit is a very important aspect of software and Internet specifications (RFCs). That's why every internet standard that i know of has a "Security Considerations" section. That's why Riseup and other militant hosting coops will take time to explain the tradeoffs of their security measures (the threat model) and how you can protect yourself some more.

There's also research on "capabilities" (for example with OCAP) to deal with some of those concerns, but i'm not really familiar with these approaches.


edmund_the_destroyer wrote

To wander off into academic territory, my understanding is that there is progress being made. You mentioned "capabilities" like OCAP, which is one path - Christopher Lemmer Webber has been doing work on distributed trust-less computing with his Spritely project ( )

Crypto-currency is an abomination and a form of pyramid scheme and gambling. But the core technology, blockchain, allows for a form of distributed provable computation. My understanding, though, is that blockchains can't be used to prove any type of computation, just certain types. So I might be able to create a blockchain that proves that person A sent 5 and person B sent 3 and computer X running the software computed that the result was 8. But - again, as far as I understand it - nobody can create a blockchain that proves that computer X didn't also send the 5, 3, and 8 somewhere else. That is, blockchain can be used to prove some things a computer program did but it can't capture and prove a complete record of everything a computer program did.
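The distinction in that paragraph can be made concrete with a toy hash-chained ledger. This is an illustrative sketch, not any real blockchain's format: the chain commits to the inputs and the computed result, so anyone can re-verify the 5 + 3 = 8 claim, but nothing in the data structure constrains what else computer X did with those values.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of a block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Block 1 records the inputs; block 2 links back to it and records the result.
genesis = {"prev": None, "data": {"A_sent": 5, "B_sent": 3}}
result = {"prev": block_hash(genesis), "data": {"X_computed": 5 + 3}}

# Anyone can re-check the linkage and the arithmetic from the public chain alone...
assert result["prev"] == block_hash(genesis)
assert result["data"]["X_computed"] == genesis["data"]["A_sent"] + genesis["data"]["B_sent"]
# ...but the chain says nothing about side effects: X may also have
# forwarded the 5, 3, and 8 anywhere, and no block would record it.
print("chain and arithmetic verify")
```

Tampering with either block changes its hash and breaks the `prev` link, which is exactly the "provable" part; the unrecorded side effects are the part no chain can capture.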


southerntofu wrote

Spritely project ( )

Yes, i'm following this project, it's really interesting, though i have a very weak understanding of how such "secure remote computation" can take place.

blockchain, allows for a form of distributed provable computation

Well, a blockchain is merely a linked list of data. This data can contain programs/contracts, as Ethereum does; however, who gets to decide which block is next on the chain is up to each implementation and its own trust model (usually, majority wins).

The least-authority (permissionless, or anarchist) alternative is the DHT. Distributed Hash Tables are decentralized databases, but contrary to blockchains they are not an ordered list of items (with restrictions on how to push something) but a loose collection of items anyone in the network can publish.

Examples of DHTs include BitTorrent's Mainline DHT and IPFS.
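The key property that makes DHT entries trustworthy without any authority is content addressing: the lookup key is the hash of the value itself, so whoever serves you the data, you can verify it locally. A toy in-memory sketch (real DHTs shard the keyspace across nodes; `TinyDHT` here is a made-up name, and SHA-1 is used only because BitTorrent infohashes historically use it):

```python
import hashlib

class TinyDHT:
    """Toy content-addressed store: the key IS the hash of the value,
    so a node can verify received data without trusting the sender."""

    def __init__(self):
        self.store = {}

    def put(self, value: bytes) -> str:
        key = hashlib.sha1(value).hexdigest()  # BitTorrent-style infohash
        self.store[key] = value
        return key

    def get(self, key: str) -> bytes:
        value = self.store[key]
        # Self-verifying: recompute the hash before trusting the payload.
        assert hashlib.sha1(value).hexdigest() == key
        return value

dht = TinyDHT()
k = dht.put(b"hello swarm")
print(dht.get(k))  # b'hello swarm'
```

Note what this buys and what it doesn't: anyone can publish anything (permissionless), and integrity is free, but there's no global ordering and no way to express "this item supersedes that one" — which is precisely the gap blockchains fill with their append rules.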


edmund_the_destroyer wrote

But the crucial bit is that a blockchain is public-key-signed transactions on top of a DHT. So I might not be able to make a particular transaction happen on a blockchain, but if it lists a transaction as having occurred then it did occur. So it doesn't prove everything and it can do things (good, bad, or neutral) in addition to proofs, but the things it lists as proved are actually proved.


yam wrote

You could fuzz for hidden API endpoints, and you could do comparative timings on requests to known API endpoints to see if they diverge from relative timings from running the service locally.
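The comparative-timing idea could be sketched roughly as follows. The two request functions here are stand-ins (simulated with `time.sleep`) for real HTTP calls against the remote service and against the same free-software release running locally; medians are used because single timings are dominated by jitter:

```python
import statistics
import time

def median_latency(call, samples=20):
    """Median wall-clock duration of a request function, in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Hypothetical stand-ins for real requests: remote service vs. local copy.
def remote_call():
    time.sleep(0.005)

def local_call():
    time.sleep(0.001)

divergence = median_latency(remote_call) - median_latency(local_call)
# A consistent gap *might* hint at extra server-side work (hidden
# processing, logging), but network jitter and hardware differences
# make this weak evidence at best.
print(divergence > 0)
```

As the replies note, this is far from conclusive: you'd have to control for network latency, hardware, and load before a divergence meant anything.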


CameronNemo wrote

The timing could be affected by so much, though. If they used a cloud instance to host, it might be easier to reproduce. But local hardware would be a guesstimate at best...


throwaway wrote

Having hidden endpoints would be beyond stupid, if they were trying to appear as though they are running free software.

The thing about timing is an interesting thought, but I really doubt it's of any practical use. It's simply not reliable enough, too many factors can affect the timing.


Hibiscus_Syrup wrote

I liked that you asked this question, I'd never thought of it! If we'd still been doing Sunday Spotlight I woulda put this in there.