mirror of https://github.com/cwinfo/yggdrasil-network.github.io.git synced 2024-09-19 00:59:36 +00:00

Fix typos

Dimitris Apostolou 2019-11-29 12:49:08 +02:00 committed by GitHub
parent a3c4fdf3ad
commit ae946a5aa3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
9 changed files with 14 additions and 14 deletions

@@ -100,7 +100,7 @@ So full connection process looks something like the following:
5. The node checks that the destination `NodeID` and bitmask match the `NodeID` of the closest node (if not, then it means the destination node doesn't exist / is offline / is unreachable while the network re-converges due some disruption).
6. The node sends a session ping to the destination.
7. The node receives a session pong from the destination, learning their public ephemeral key.
-8. The nodes can now send regular IPv6 traffic to eachother, encrypted with the ephemeral shared secret, using the session's cached `coords` to avoid future lookups (unless the session is unresponsive for too long, in which case any new sends will also trigger a ping, or a new DHT lookup if the pings fail).
+8. The nodes can now send regular IPv6 traffic to each other, encrypted with the ephemeral shared secret, using the session's cached `coords` to avoid future lookups (unless the session is unresponsive for too long, in which case any new sends will also trigger a ping, or a new DHT lookup if the pings fail).
### Conclusion
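As a rough illustration of steps 5–8 above, here is a minimal Go sketch of a session that caches `coords` from a DHT lookup and falls back to pings (and a fresh lookup) when the peer goes quiet. All type and function names are invented for illustration; this is not Yggdrasil's actual session code.

```go
package main

import "fmt"

// session is an invented type sketching steps 5-8: cached coords from a
// DHT lookup, plus a liveness flag maintained by session pings/pongs.
type session struct {
	coords []byte // tree coordinates learned from the DHT lookup
	alive  bool   // true while pongs keep arriving
}

// send reuses the cached coords while the session is responsive; once it
// goes quiet, a send triggers a ping, and a fresh DHT lookup if that fails.
func (s *session) send(payload []byte) {
	if !s.alive {
		fmt.Println("stale session: ping peer, redo DHT lookup if no pong")
		return
	}
	fmt.Printf("encrypting %d bytes with ephemeral key, coords=%v\n",
		len(payload), s.coords)
}

func main() {
	s := &session{coords: []byte{1, 3, 2}, alive: true}
	s.send([]byte("hello"))  // fast path: cached coords
	s.alive = false          // pings went unanswered
	s.send([]byte("hello?")) // slow path: ping / new lookup
}
```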

@@ -27,7 +27,7 @@ This post attempts to explain Yggdrasil's congestion control mechanism, why past
The first thing to try is not to implement any explicit buffering in Yggdrasil.
Packets are received from a socket, we look up where the packet needs to go next, and then we send on that socket.
-This immediately leads to blocking network operations and poor performance, so we need need separate read and write threads (goroutines, in our case).
+This immediately leads to blocking network operations and poor performance, so we need separate read and write threads (goroutines, in our case).
Initially, we used buffered channels and non-blocking channel sends.
This means that, instead of the reader goroutine writing to the socket to send, it would pass it to a channel which a dedicated writer goroutine would read from.
The problem with this approach is that Go channels with non-blocking sends are [FIFO](https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)) and [tail dropped](https://en.wikipedia.org/wiki/Tail_drop).
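The FIFO tail-drop behavior called out above is easy to see in a few lines of Go. This toy (mine, not from the Yggdrasil source) fills a buffered channel with non-blocking sends, so once the buffer is full the *newest* packets are the ones discarded:

```go
package main

import "fmt"

func main() {
	// A buffered channel with non-blocking sends behaves as a FIFO
	// tail-drop queue: packets that arrive while the buffer is full
	// are simply discarded.
	queue := make(chan int, 4)
	for pkt := 0; pkt < 8; pkt++ {
		select {
		case queue <- pkt:
			// queued for the writer goroutine
		default:
			fmt.Println("tail drop: packet", pkt)
		}
	}
	close(queue)
	for pkt := range queue {
		fmt.Println("sent packet", pkt) // the writer goroutine's view
	}
}
```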
@@ -56,7 +56,7 @@ What we want is for multiple streams of traffic to be handled independently, to
Then, we can reward different traffic streams to prioritize based on lowest bandwidth (i.e. size of queue / age of oldest packet in queue, with a separate queue per traffic stream).
Now we let traffic streams compete for bandwidth.
The winning strategy, to get more bandwidth during times of congestion, is to attempt to use *less* bandwidth, which I argue is exactly the behavior we want to encourage.
-Streams of traffic that play nice get a fair share of bandwidth, which includes pretty much every sane TCP implementation, and streams that flood goto timeout.
+Streams of traffic that play nice get a fair share of bandwidth, which includes pretty much every sane TCP implementation, and streams that flood go to timeout.
### Yggdrasil's congestion control
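A sketch of the queue-selection rule described in the hunk above: approximate each stream's recent bandwidth as queue size divided by the age of its oldest packet, and serve the stream with the *lowest* estimate first. The types and numbers below are invented for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// streamQueue is a toy per-stream queue, standing in for the separate
// queue per traffic stream described above.
type streamQueue struct {
	name   string
	bytes  int       // total queued bytes
	oldest time.Time // arrival time of the oldest queued packet
}

// estimatedBandwidth approximates a stream's recent usage as
// queue size / age of oldest packet.
func (q streamQueue) estimatedBandwidth(now time.Time) float64 {
	age := now.Sub(q.oldest).Seconds()
	if age <= 0 {
		age = 1e-9
	}
	return float64(q.bytes) / age
}

func main() {
	now := time.Now()
	queues := []streamQueue{
		{"bulk", 64000, now.Add(-100 * time.Millisecond)},
		{"interactive", 1200, now.Add(-100 * time.Millisecond)},
	}
	// Serve the stream with the lowest bandwidth estimate first.
	best := queues[0]
	for _, q := range queues[1:] {
		if q.estimatedBandwidth(now) < best.estimatedBandwidth(now) {
			best = q
		}
	}
	fmt.Println("dequeue next from:", best.name) // "interactive" wins
}
```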
@@ -99,7 +99,7 @@ Still, because we won't really know without trying, adding the required new pack
Yggdrasil has gone through a number of different congestion control mechanisms since the TCP link layer was introduced.
The current congestion control mechanism rewards traffic streams which utilize less bandwidth by prioritizing them higher than streams using more bandwidth.
-Cooperative stream obtain a fair share of bandwidth, while stream which attempt to use more than their fair share are given lower priority, and are forced to throttle down as a result.
+Cooperative streams obtain a fair share of bandwidth, while streams which attempt to use more than their fair share are given lower priority, and are forced to throttle down as a result.
When packet drops become necessary, a random drop mechanism is used which penalizes large queues the most, which should signal congestion to the worst offenders.
Much of this is a precursor to backpressure routing, which, if it works out in practice as well as it does on paper, should give the network a nearly-optimal latency/bandwidth trade-off.
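The random-drop rule mentioned above can be sketched as size-weighted sampling: the probability of a queue being picked for a drop is proportional to its size, so the largest queue (the worst offender) absorbs most of the drops. This is an illustrative toy, not the actual drop logic:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickQueueToDrop chooses a queue with probability proportional to its
// size, so larger queues are penalized the most.
func pickQueueToDrop(sizes []int, rng *rand.Rand) int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	r := rng.Intn(total)
	for i, s := range sizes {
		if r < s {
			return i
		}
		r -= s
	}
	return len(sizes) - 1
}

func main() {
	rng := rand.New(rand.NewSource(1))
	sizes := []int{100, 10, 1} // queued bytes per stream
	hits := make([]int, len(sizes))
	for i := 0; i < 1000; i++ {
		hits[pickQueueToDrop(sizes, rng)]++
	}
	fmt.Println("drops per queue:", hits) // queue 0 takes nearly all drops
}
```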

@@ -56,7 +56,7 @@ Then, when a node needs to forward a packet, it checks the tree location of each
This is explained in more detail in earlier blog posts, if you're not familiar with how Yggdrasil routes and care to read more.
In our package delivery example, imagine if the streets in Alice's town were laid out in a grid, and then named and numbered systematically by blocks, with street signs to label where any off-grid bypasses go.
-Alice and friends still haven't bought maps, but they they know each other's *addresses* instead.
+Alice and friends still haven't bought maps, but they know each other's *addresses* instead.
So, if Alice wants to contact Carol, she first travels to Bob's house and asks him for Carol's address.
Now, when she wants to deliver a package to Carol, she can simply follow the block structure of the town until she arrives on Carol's block, and she has the option to take any bypass she happens to come across if it brings her closer to Carol's place.
That's basically how routing on the tree, or taking an off-tree shortcut, work in Yggdrasil's greedy routing scheme, except with a tree instead of a grid (which, in addition to working everywhere, seems to work *well* in the places we care about).
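The greedy rule described above can be sketched as: hand the packet to any peer whose tree coordinates are strictly closer to the destination than your own, where distance is hops up to the deepest common ancestor and back down. The coordinate scheme below is invented for illustration, not Yggdrasil's actual encoding:

```go
package main

import "fmt"

// treeDistance counts hops up to the deepest common ancestor and back
// down, a toy stand-in for the greedy routing metric described above.
func treeDistance(a, b []uint64) int {
	common := 0
	for common < len(a) && common < len(b) && a[common] == b[common] {
		common++
	}
	return (len(a) - common) + (len(b) - common)
}

func main() {
	self := []uint64{0, 3, 2, 5}
	dest := []uint64{0, 3, 1, 4}
	peers := map[string][]uint64{
		"parent":   {0, 3, 2},
		"sibling":  {0, 3, 2, 7},
		"shortcut": {0, 3, 1}, // an off-tree link that happens to help
	}
	// Greedy rule: forward to any peer strictly closer to dest than we are.
	bestName, bestDist := "drop", treeDistance(self, dest)
	for name, coords := range peers {
		if d := treeDistance(coords, dest); d < bestDist {
			bestName, bestDist = name, d
		}
	}
	fmt.Printf("next hop: %s (distance %d)\n", bestName, bestDist)
}
```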

@@ -28,7 +28,7 @@ In addition, the number of peers you want to add depends on what you want to do.
### What happens when things go wrong
-Lets imagine we have some nodes in New York, and initially they follow the peering rules outlined above. Now suppose that two of these nodes decide that they want to add connections to London. In Yggdrasil, nodes tend to select parents that minimize latency to the root, which happens to be a node in Paris at the time I'm writing this. As a result, both of the NY nodes are likely to select their respective London peers as their parents. If the nodes are following the peering rules, then at least one of them has also decided to peer with the other, so they have a shortcut they can use to talk to each-other (or any descendants in the tree).
+Let's imagine we have some nodes in New York, and initially they follow the peering rules outlined above. Now suppose that two of these nodes decide that they want to add connections to London. In Yggdrasil, nodes tend to select parents that minimize latency to the root, which happens to be a node in Paris at the time I'm writing this. As a result, both of the NY nodes are likely to select their respective London peers as their parents. If the nodes are following the peering rules, then at least one of them has also decided to peer with the other, so they have a shortcut they can use to talk to each-other (or any descendants in the tree).
However, if they ignore the peering rules and *don't* peer with each other, then they are likely to route through London instead of communicating over their local mesh network. A shorter path exists, through their local mesh network, but it's not one that the network *must* know about for routing to work, so they won't necessarily know about it. As a result, the latency between these two nodes (or decedents thereof) will likely be an order of magnitude more than it needs to be (and probably lower bandwidth as well).

@@ -204,7 +204,7 @@ particular density due to only having a limited number of Macs to hand.
One thing that I did notice though is that, while AWDL is active, my wireless
connection to my home Wi-Fi network does reduce in speed somewhat. This is to be
expected, given that the wireless chipset is hopping between channels rather
-than spending all of it's time on a single channel.
+than spending all of its time on a single channel.
Sadly we weren't able to reproduce this test using iOS Testflight builds of
Yggdrasil. On iOS, we implement Yggdrasil as a VPN service which is subject to a

@@ -52,7 +52,7 @@ Different implementations differ on details after that, such as what order messa
<!-- a play on "Turning point", aka the Climax of a classic 5-act play structure, which is what this post's structure is modeled after if you hadn't figured it out by this point -->
-I'm particularly fond of the [pony](https://ponylang.io) programming language's take on the actor model. I really can't being to say enough nice things about their approach, and fully describing it is beyond the scope of this blog post, but if you come out of here with an interest in the actor model, then I highly recommend checking out that language. Maybe watch a few of the talks from the developers that have been posted to youtube, or read their papers about what is *easily* the most promising approach to garbage collection I've ever come across.
+I'm particularly fond of the [pony](https://ponylang.io) programming language's take on the actor model. I really can't say enough nice things about their approach, and fully describing it is beyond the scope of this blog post, but if you come out of here with an interest in the actor model, then I highly recommend checking out that language. Maybe watch a few of the talks from the developers that have been posted to YouTube, or read their papers about what is *easily* the most promising approach to garbage collection I've ever come across.
Anyway, I don't actually work on anything written in pony, but I like their version of the actor model so much that I decided to see if I could trick Go's runtime into faking it. The result is [`phony`](https://github.com/Arceliar/phony), which manages to do most of what I want in under 70 lines of code. When we write code using this asynchronous message passing style, instead of ordinary goroutines+channels, the implications are pretty significant:
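As a concrete taste of that style: `phony` is the real library linked above, and the toy counter below (mine, not Yggdrasil code) uses its documented `Inbox`/`Act`/`Block` pattern. State is only ever touched from inside the owning `Inbox`, so no mutex is needed, and sends are asynchronous:

```go
package main

import (
	"fmt"

	"github.com/Arceliar/phony"
)

// counter's state is only mutated by functions run on its own Inbox.
type counter struct {
	phony.Inbox
	count int
}

// Increment asynchronously messages the counter; it returns immediately.
func (c *counter) Increment(from phony.Actor) {
	c.Act(from, func() { c.count++ })
}

func main() {
	c := &counter{}
	for i := 0; i < 100; i++ {
		c.Increment(nil) // a nil sender is fine for non-actor callers
	}
	// Block runs a function on c's Inbox and waits for it, so everything
	// queued before it has been processed by the time it runs.
	phony.Block(c, func() { fmt.Println("count:", c.count) })
}
```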
@@ -163,7 +163,7 @@ And that's about it. The first argument to `myActor.RunTheFunction` also `nil`able
What's great is that we don't need to think about starting or stopping workers, deadlocks and leaks are not possible outside of blocking operations (e.g. I/O), and we can add or reuse behaviors just as easily as any function. I find the code easier to read and reason about too.
-I/O is one rough spot, since an `Actor` can block on a `Read` or a `Write` and not process incoming messages as a result. This isn't really any worse than working with normal Go code, and the pattern we've adopted is to have separate `Actor`s for `Read` and `Write`, where one mostly just sits in a `Read` loop and sends the results (and/or error) somewhere whenever a `Read` finishes. These two workers can be children of some parent `Actor`, which is the only one the rest of the code needs to know about, and then all we need to remember to do is close the `ReadWriteCloser` (e.g. socket) at some point when we're done. This is the sort of thing that we'll eventually want to write a standard `struct` for, update our code everywhere to use it, and then never have to think about it again. In the mean time, we have a couple of very similar implementations for working with sockets or the tun/tap device.
+I/O is one rough spot, since an `Actor` can block on a `Read` or a `Write` and not process incoming messages as a result. This isn't really any worse than working with normal Go code, and the pattern we've adopted is to have separate `Actor`s for `Read` and `Write`, where one mostly just sits in a `Read` loop and sends the results (and/or error) somewhere whenever a `Read` finishes. These two workers can be children of some parent `Actor`, which is the only one the rest of the code needs to know about, and then all we need to remember to do is close the `ReadWriteCloser` (e.g. socket) at some point when we're done. This is the sort of thing that we'll eventually want to write a standard `struct` for, update our code everywhere to use it, and then never have to think about it again. In the meantime, we have a couple of very similar implementations for working with sockets or the tun/tap device.
### Dénouement
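A sketch of that separate-reader pattern, using `phony` again: a dedicated loop sits in blocking `Read` calls (here a plain goroutine, for brevity, rather than a child `Actor`) and messages each result to a parent, whose `Inbox` therefore never blocks on I/O. All type names are invented:

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/Arceliar/phony"
)

// parent receives read results as asynchronous messages on its Inbox.
type parent struct {
	phony.Inbox
}

func (p *parent) handleRead(buf []byte, err error) {
	p.Act(nil, func() {
		if err != nil {
			fmt.Println("reader finished:", err)
			return
		}
		fmt.Printf("got %q\n", buf)
	})
}

// readLoop blocks on Read so the parent's Inbox doesn't have to.
func readLoop(r io.Reader, p *parent) {
	for {
		buf := make([]byte, 8)
		n, err := r.Read(buf)
		if n > 0 {
			p.handleRead(buf[:n], nil)
		}
		if err != nil {
			p.handleRead(nil, err)
			return
		}
	}
}

func main() {
	r := strings.NewReader("hello, actors!") // stand-in for a socket
	done := make(chan struct{})
	p := &parent{}
	go func() { readLoop(r, p); close(done) }()
	<-done
	phony.Block(p, func() {}) // flush the parent's Inbox before exiting
}
```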

@@ -252,7 +252,7 @@ Returns:
#### `removeRoute`
Expects:
-- `subnet=` `string` for the subnet to remove the route route for
+- `subnet=` `string` for the subnet to remove the route for
- `box_pub_key=` `string` for the public key that is routed to
Removes an existing crypto-key route.
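For illustration, a `removeRoute` request with the two expected parameters might be shaped like the JSON this Go snippet prints. The admin socket speaks JSON, but the exact field casing and framing here are assumptions on my part, so verify against `yggdrasilctl` or the admin docs before relying on them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical request body; field names follow the parameter list
	// documented above, the rest is assumed.
	req := map[string]string{
		"request":     "removeRoute",
		"subnet":      "fd00::/64",    // placeholder subnet
		"box_pub_key": "0123...abcd", // truncated placeholder key
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b))
}
```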

@@ -66,7 +66,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
### Changed
- On recent Linux kernels, Yggdrasil will now set the `tcp_congestion_control` algorithm used for its own TCP sockets to [BBR](https://github.com/google/bbr), which reduces latency under load
-- The systemd service configuration in `contrib` (and, by extension, some of our packages) now attemps to load the `tun` module, in case TUN/TAP support is available but not loaded, and it restricts Yggdrasil to the `CAP_NET_ADMIN` capability for managing the TUN/TAP adapter, rather than letting it do whatever the (typically `root`) user can do
+- The systemd service configuration in `contrib` (and, by extension, some of our packages) now attempts to load the `tun` module, in case TUN/TAP support is available but not loaded, and it restricts Yggdrasil to the `CAP_NET_ADMIN` capability for managing the TUN/TAP adapter, rather than letting it do whatever the (typically `root`) user can do
### Fixed
- The `yggdrasil.Conn.RemoteAddr()` function no longer blocks, fixing a deadlock when CKR is used while under heavy load
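The BBR entry in the hunk above implies setting the per-socket `TCP_CONGESTION` option. Here is a generic, Linux-only sketch of how such a switch can be made from Go (my own example, not Yggdrasil's actual code):

```go
package main

import (
	"fmt"
	"net"

	"golang.org/x/sys/unix"
)

// setBBR switches a TCP socket's congestion control algorithm to BBR via
// the Linux TCP_CONGESTION socket option.
func setBBR(conn *net.TCPConn) error {
	raw, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	var serr error
	if err := raw.Control(func(fd uintptr) {
		serr = unix.SetsockoptString(int(fd),
			unix.IPPROTO_TCP, unix.TCP_CONGESTION, "bbr")
	}); err != nil {
		return err
	}
	return serr
}

func main() {
	// Loopback pair just to get a real *net.TCPConn; error handling elided.
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	defer ln.Close()
	go func() { net.Dial("tcp", ln.Addr().String()) }()
	c, _ := ln.Accept()
	defer c.Close()
	if err := setBBR(c.(*net.TCPConn)); err != nil {
		fmt.Println("could not enable bbr (module may be missing):", err)
		return
	}
	fmt.Println("bbr enabled")
}
```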
@@ -180,7 +180,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
## [0.3.4] - 2019-03-12
### Added
- Support for multiple listeners (although currently only TCP listeners are supported)
-- New multicast behaviour where each multicast interface is given it's own link-local listener and does not depend on the `Listen` configuration
+- New multicast behaviour where each multicast interface is given its own link-local listener and does not depend on the `Listen` configuration
- Blocking detection in the switch to avoid parenting a blocked peer
- Support for adding and removing listeners and multicast interfaces when reloading configuration during runtime
- Yggdrasil will now attempt to clean up UNIX admin sockets on startup if left behind by a previous crash
@@ -374,7 +374,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
- Wire format changes (backwards incompatible).
- Less maintenance traffic per peer.
- Exponential back-off for DHT maintenance traffic (less maintenance traffic for known good peers).
-- Iterative DHT (added some time between v0.1.0 and here).
+- Iterative DHT (added sometime between v0.1.0 and here).
- Use local queue sizes for a sort of local-only backpressure routing, instead of the removed bandwidth estimates, when deciding where to send a packet.
### Removed

@@ -371,7 +371,7 @@ interface eth0
```
Note that a `/64` prefix has fewer bits of address space available to check against the node's ID, which in turn means hash collisions are more likely.
-As such, it is unwise to rely on addresses as a form of identify verification for the `300::/8` address range.
+As such, it is unwise to rely on addresses as a form of identity verification for the `300::/8` address range.
## Generating Stronger Addresses (and Prefixes)
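To make the collision warning in the hunk above concrete: if an attacker must match n checked bits of a `NodeID`, brute force costs roughly 2^n key generations, so every bit lost to a shorter prefix halves the work. The specific bit counts in this sketch are illustrative assumptions, not the exact figures for the `300::/8` scheme:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumed, illustrative bit counts: fewer checked bits for a /64
	// prefix than for a full 128-bit address.
	for _, bits := range []int{56, 120} {
		fmt.Printf("~%d checked bits -> ~2^%d = %.3g candidate keys\n",
			bits, bits, math.Pow(2, float64(bits)))
	}
}
```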