commit 8c29f4b6dc

README.md
@@ -8,15 +8,14 @@
 This is a toy implementation of an encrypted IPv6 network, with many good ideas stolen from [cjdns](https://github.com/cjdelisle/cjdns), which was written to test a particular routing scheme that was cobbled together one random afternoon.
 It's notably not a shortest path routing scheme, with the goal of scalable name-independent routing on dynamic networks with an internet-like topology.
 It's named Yggdrasil after the world tree from Norse mythology, because that seemed like the obvious name given how it works.
-For a longer, rambling version of this readme with more information, see: [doc](doc/README.md).
-A very early incomplete draft of a [whitepaper](doc/Whitepaper.md) describing the protocol is also available.
+More information is available at <https://yggdrasil-network.github.io/>.
 
-This is a toy / proof-of-principle, so it's not even alpha quality software--any nontrivial update is likely to break backwards compatibility with no possibility for a clean upgrade path.
+This is a toy / proof-of-principle, and considered alpha quality by the developers. It's not expected to be feature complete, and future updates may not be backwards compatible, though it should warn you if it sees a connection attempt with a node running a newer version.
 You're encouraged to play with it, but it is strongly advised not to use it for anything mission critical.
 
 ## Building
 
-1. Install Go (tested on 1.9+, [godeb](https://github.com/niemeyer/godeb) is recommended).
+1. Install Go (tested on 1.9+, [godeb](https://github.com/niemeyer/godeb) is recommended for debian-based linux distributions).
 2. Clone this repository.
 2. `./build`
 
@@ -44,10 +43,9 @@ In practice, you probably want to run this instead:
 
 This keeps a persistent set of keys (and by extension, IP address) and gives you the option of editing the configuration file.
 If you want to use it as an overlay network on top of e.g. the internet, then you can do so by adding the remote devices domain/address and port (as a string, e.g. `"1.2.3.4:5678"`) to the list of `Peers` in the configuration file.
-You can control whether or not it peers over TCP or UDP by adding `tcp://` or `udp://` to the start of the string, i.e. `"udp://1.2.3.4:5678"`.
-It is also possible to route outgoing TCP connections through a socks proxy using the syntax: `"socks://socksHost:socksPort/destHost:destPort"`.
-It is currently configured to accept incoming TCP and UDP connections.
-In the interest of testing the TCP machinery, it's set to create TCP connections for auto-peering (over link-local IPv6), and to use TCP by default if no transport is specified for a manually configured peer.
+By default, it peers over TCP (which can be forced with `"tcp://1.2.3.4:5678"` syntax), but it's also possible to connect over a socks proxy (`"socks://socksHost:socksPort/1.2.3.4:5678"`).
+The socks proxy approach is useful for e.g. [peering over tor hidden services](https://github.com/yggdrasil-network/public-peers/blob/master/other/tor.md).
+UDP support was removed as part of v0.2, and may be replaced by a better implementation at a later date.
 
 ### Platforms
 
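The peering strings shown in the hunk above correspond to entries in the `Peers` list of the `config.NodeConfig` struct that appears later in this diff. A minimal sketch of what that looks like in Go; the addresses are placeholders, and the import path assumes the GOPATH layout used by the `build` script:

```go
package example

import "yggdrasil/config"

// examplePeers returns a NodeConfig with the two kinds of peering strings
// described above: a plain TCP peer and a peer reached through a socks proxy.
func examplePeers() config.NodeConfig {
	return config.NodeConfig{
		Peers: []string{
			"tcp://1.2.3.4:5678",                       // direct TCP peering
			"socks://socksHost:socksPort/1.2.3.4:5678", // TCP peering via a socks proxy
		},
	}
}
```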
@@ -130,11 +128,11 @@ interface eth0
 };
 ```
 
-This is enough to give unsupported devices on the LAN access to the yggdrasil network, with a few security and performance cautions outlined in the [doc](doc/README.md) file.
+This is enough to give unsupported devices on the LAN access to the yggdrasil network. See the [configuration](https://yggdrasil-network.github.io/configuration.html) page for more info.
 
 ## How does it work?
 
-I'd rather not try to explain in the readme, but it is described further in the [doc](doc/README.md) file or the very draft of a [whitepaper](doc/Whitepaper.md), so you can check there if you're interested.
+I'd rather not try to explain in the readme, but it is described further on the [about](https://yggdrasil-network.github.io/about.html) page, so you can check there if you're interested.
 Be warned that it's still not a very good explanation, but it at least gives a high-level overview and links to some relevant work by other people.
 
 ## Obligatory performance propaganda

build
@@ -5,7 +5,7 @@ go get -d -v
 go get -d -v yggdrasil
 for file in *.go ; do
 echo "Building: $file"
-go build -v $file
+go build $@ $file
 #go build -ldflags="-s -w" -v $file
 #upx --brute ${file/.go/}
 done

@@ -54,10 +54,12 @@ func linkNodes(m, n *Node) {
 // Create peers
 // Buffering reduces packet loss in the sim
 // This slightly speeds up testing (fewer delays before retrying a ping)
+pLinkPub, pLinkPriv := m.core.DEBUG_newBoxKeys()
+qLinkPub, qLinkPriv := m.core.DEBUG_newBoxKeys()
 p := m.core.DEBUG_getPeers().DEBUG_newPeer(n.core.DEBUG_getEncryptionPublicKey(),
-n.core.DEBUG_getSigningPublicKey())
+n.core.DEBUG_getSigningPublicKey(), *m.core.DEBUG_getSharedKey(pLinkPriv, qLinkPub))
 q := n.core.DEBUG_getPeers().DEBUG_newPeer(m.core.DEBUG_getEncryptionPublicKey(),
-m.core.DEBUG_getSigningPublicKey())
+m.core.DEBUG_getSigningPublicKey(), *n.core.DEBUG_getSharedKey(qLinkPriv, pLinkPub))
 DEBUG_simLinkPeers(p, q)
 return
 }
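The new link-key arguments above rely on a property of NaCl box keys: both ends of a link precompute the same shared key from their own private key and the other side's public key. A standalone sketch of that symmetry, using golang.org/x/crypto/nacl/box directly rather than this repo's wrappers:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Two link keypairs, standing in for pLink and qLink in the hunk above.
	pPub, pPriv, _ := box.GenerateKey(rand.Reader)
	qPub, qPriv, _ := box.GenerateKey(rand.Reader)

	var fromP, fromQ [32]byte
	box.Precompute(&fromP, qPub, pPriv) // p's view of the link
	box.Precompute(&fromQ, pPub, qPriv) // q's view of the link

	// Both sides derive the same shared key, which is why the sim can pass
	// DEBUG_getSharedKey(pLinkPriv, qLinkPub) to one peer and the mirrored
	// call to the other.
	fmt.Println("shared keys match:", bytes.Equal(fromP[:], fromQ[:]))
}
```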
@@ -160,17 +162,13 @@ func testPaths(store map[[32]byte]*Node) bool {
 ttl := ^uint64(0)
 oldTTL := ttl
 for here := source; here != dest; {
-if ttl == 0 {
-fmt.Println("Drop:", source.index, here.index, dest.index, oldTTL)
-return false
-}
 temp++
 if temp > 4096 {
-panic("Loop?")
+fmt.Println("Loop?")
+time.Sleep(time.Second)
+return false
 }
-oldTTL = ttl
-nextPort, newTTL := here.core.DEBUG_switchLookup(coords, ttl)
-ttl = newTTL
+nextPort := here.core.DEBUG_switchLookup(coords)
 // First check if "here" is accepting packets from the previous node
 // TODO explain how this works
 ports := here.core.DEBUG_getPeers().DEBUG_getPorts()
@@ -201,12 +199,16 @@ func testPaths(store map[[32]byte]*Node) bool {
 source.index, source.core.DEBUG_getLocator(),
 here.index, here.core.DEBUG_getLocator(),
 dest.index, dest.core.DEBUG_getLocator())
-here.core.DEBUG_getSwitchTable().DEBUG_dumpTable()
+//here.core.DEBUG_getSwitchTable().DEBUG_dumpTable()
 }
 if here != source {
 // This is sufficient to check for routing loops or blackholes
 //break
 }
+if here == next {
+fmt.Println("Drop:", source.index, here.index, dest.index, oldTTL)
+return false
+}
 here = next
 }
 }
@@ -227,7 +229,7 @@ func stressTest(store map[[32]byte]*Node) {
 start := time.Now()
 for _, source := range store {
 for _, coords := range dests {
-source.core.DEBUG_switchLookup(coords, ^uint64(0))
+source.core.DEBUG_switchLookup(coords)
 lookups++
 }
 }
@@ -379,12 +381,12 @@ func dumpDHTSize(store map[[32]byte]*Node) {
 fmt.Printf("DHT min %d / avg %f / max %d\n", min, avg, max)
 }
 
-func (n *Node) startUDP(listen string) {
-n.core.DEBUG_setupAndStartGlobalUDPInterface(listen)
+func (n *Node) startTCP(listen string) {
+n.core.DEBUG_setupAndStartGlobalTCPInterface(listen)
 }
 
-func (n *Node) connectUDP(remoteAddr string) {
-n.core.DEBUG_maybeSendUDPKeys(remoteAddr)
+func (n *Node) connectTCP(remoteAddr string) {
+n.core.AddPeer(remoteAddr)
 }
 
 ////////////////////////////////////////////////////////////////////////////////
@@ -440,8 +442,8 @@ func main() {
 if false {
 // This connects the sim to the local network
 for _, node := range kstore {
-node.startUDP("localhost:0")
-node.connectUDP("localhost:12345")
+node.startTCP("localhost:0")
+node.connectTCP("localhost:12345")
 break // just 1
 }
 for _, node := range kstore {

@@ -1,10 +1,17 @@
 package yggdrasil
 
-type address [16]byte // IPv6 address within the network
-type subnet [8]byte // It's a /64
+// address represents an IPv6 address in the yggdrasil address range.
+type address [16]byte
 
-var address_prefix = [...]byte{0xfd} // For node addresses + local subnets
+// subnet represents an IPv6 /64 subnet in the yggdrasil subnet range.
+type subnet [8]byte
+
+// address_prefix is the prefix used for all addresses and subnets in the network.
+// The current implementation requires this to be a multiple of 8 bits.
+// Nodes that configure this differently will be unable to communicate with eachother, though routing and the DHT machinery *should* still work.
+var address_prefix = [...]byte{0xfd}
 
+// isValid returns true if an address falls within the range used by nodes in the network.
 func (a *address) isValid() bool {
 for idx := range address_prefix {
 if (*a)[idx] != address_prefix[idx] {
@@ -14,6 +21,7 @@ func (a *address) isValid() bool {
 return (*a)[len(address_prefix)]&0x80 == 0
 }
 
+// isValid returns true if a prefix falls within the range usable by the network.
 func (s *subnet) isValid() bool {
 for idx := range address_prefix {
 if (*s)[idx] != address_prefix[idx] {
@@ -23,6 +31,11 @@ func (s *subnet) isValid() bool {
 return (*s)[len(address_prefix)]&0x80 != 0
 }
 
+// address_addrForNodeID takes a *NodeID as an argument and returns an *address.
+// This address begins with the address prefix.
+// The next bit is 0 for an address, and 1 for a subnet.
+// The following 7 bits are set to the number of leading 1 bits in the NodeID.
+// The NodeID, excluding the leading 1 bits and the first leading 1 bit, is truncated to the appropriate length and makes up the remainder of the address.
 func address_addrForNodeID(nid *NodeID) *address {
 // 128 bit address
 // Begins with prefix
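A rough standalone sketch of the address layout those comments describe, assuming a 64-byte NodeID and the 0xfd prefix; the function below is illustrative only and is not the repo's address_addrForNodeID:

```go
package example

// sketchAddrForNodeID lays out an address as described above: the prefix byte,
// a 0 bit marking an address, 7 bits holding the count of leading 1 bits in
// the NodeID, then as many of the following NodeID bits as fit.
// (Assumes the count fits in 7 bits, which it does for realistic NodeIDs.)
func sketchAddrForNodeID(nid [64]byte) [16]byte {
	var addr [16]byte
	// Count the leading 1 bits of the NodeID.
	ones := 0
	for i := 0; i < len(nid)*8; i++ {
		if nid[i/8]&(0x80>>uint(i%8)) == 0 {
			break
		}
		ones++
	}
	addr[0] = 0xfd       // address prefix
	addr[1] = byte(ones) // leading bit 0 (address), then the 7-bit count
	// Copy the NodeID bits that follow the leading 1s and the single bit after them.
	in := ones + 1
	for out := 16; out < 128; out++ { // the remaining 112 bits of the address
		if nid[in/8]&(0x80>>uint(in%8)) != 0 {
			addr[out/8] |= 0x80 >> uint(out%8)
		}
		in++
	}
	return addr
}
```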
@@ -59,6 +72,11 @@ func address_addrForNodeID(nid *NodeID) *address {
 return &addr
 }
 
+// address_subnetForNodeID takes a *NodeID as an argument and returns a *subnet.
+// This subnet begins with the address prefix.
+// The next bit is 0 for an address, and 1 for a subnet.
+// The following 7 bits are set to the number of leading 1 bits in the NodeID.
+// The NodeID, excluding the leading 1 bits and the first leading 1 bit, is truncated to the appropriate length and makes up the remainder of the subnet.
 func address_subnetForNodeID(nid *NodeID) *subnet {
 // Exactly as the address version, with two exceptions:
 // 1) The first bit after the fixed prefix is a 1 instead of a 0
@@ -70,6 +88,10 @@ func address_subnetForNodeID(nid *NodeID) *subnet {
 return &snet
 }
 
+// getNodeIDandMask returns two *NodeID.
+// The first is a NodeID with all the bits known from the address set to their correct values.
+// The second is a bitmask with 1 bit set for each bit that was known from the address.
+// This is used to look up NodeIDs in the DHT and tell if they match an address.
 func (a *address) getNodeIDandMask() (*NodeID, *NodeID) {
 // Mask is a bitmask to mark the bits visible from the address
 // This means truncated leading 1s, first leading 0, and visible part of addr
@@ -95,6 +117,10 @@ func (a *address) getNodeIDandMask() (*NodeID, *NodeID) {
 return &nid, &mask
 }
 
+// getNodeIDandMask returns two *NodeID.
+// The first is a NodeID with all the bits known from the address set to their correct values.
+// The second is a bitmask with 1 bit set for each bit that was known from the subnet.
+// This is used to look up NodeIDs in the DHT and tell if they match a subnet.
 func (s *subnet) getNodeIDandMask() (*NodeID, *NodeID) {
 // As with the address version, but visible parts of the subnet prefix instead
 var nid NodeID
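The comments above describe a partial NodeID plus a mask of which bits are known; checking whether a full NodeID matches an address then reduces to comparing only the masked bits. A small hypothetical sketch of that check (the names and the 64-byte size are assumptions):

```go
package example

// sketchMatches reports whether candidate agrees with the partially-known
// NodeID on every bit that the mask marks as known.
func sketchMatches(candidate, partial, mask [64]byte) bool {
	for i := range candidate {
		if candidate[i]&mask[i] != partial[i]&mask[i] {
			return false
		}
	}
	return true
}
```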

@@ -1,17 +1,19 @@
 package yggdrasil
 
-import "net"
-import "os"
-import "encoding/hex"
-import "encoding/json"
-import "errors"
-import "fmt"
-import "net/url"
-import "sort"
-import "strings"
-import "strconv"
-import "sync/atomic"
-import "time"
+import (
+"encoding/hex"
+"encoding/json"
+"errors"
+"fmt"
+"net"
+"net/url"
+"os"
+"sort"
+"strconv"
+"strings"
+"sync/atomic"
+"time"
+)
 
 // TODO: Add authentication
 
@@ -29,17 +31,21 @@ type admin_handlerInfo struct {
 handler func(admin_info) (admin_info, error) // First is input map, second is output
 }
 
-// Maps things like "IP", "port", "bucket", or "coords" onto strings
+// admin_pair maps things like "IP", "port", "bucket", or "coords" onto values.
 type admin_pair struct {
 key string
 val interface{}
 }
 
+// admin_nodeInfo represents the information we know about a node for an admin response.
 type admin_nodeInfo []admin_pair
 
+// addHandler is called for each admin function to add the handler and help documentation to the API.
 func (a *admin) addHandler(name string, args []string, handler func(admin_info) (admin_info, error)) {
 a.handlers = append(a.handlers, admin_handlerInfo{name, args, handler})
 }
 
+// init runs the initial admin setup.
 func (a *admin) init(c *Core, listenaddr string) {
 a.core = c
 a.listenaddr = listenaddr
@@ -215,11 +221,13 @@ func (a *admin) init(c *Core, listenaddr string) {
 })
 }
 
+// start runs the admin API socket to listen for / respond to admin API calls.
 func (a *admin) start() error {
 go a.listen()
 return nil
 }
 
+// listen is run by start and manages API connections.
 func (a *admin) listen() {
 l, err := net.Listen("tcp", a.listenaddr)
 if err != nil {
@@ -236,6 +244,7 @@ func (a *admin) listen() {
 }
 }
 
+// handleRequest calls the request handler for each request sent to the admin API.
 func (a *admin) handleRequest(conn net.Conn) {
 decoder := json.NewDecoder(conn)
 encoder := json.NewEncoder(conn)
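handleRequest wires a json.Decoder and json.Encoder to each connection, so the admin API speaks JSON over the socket. A hypothetical client sketch, assuming the default localhost:9001 address from the AdminListen comment in the config file and a request object whose "request" field names the handler; the exact wire format is not shown in these hunks:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

func main() {
	// Assumed default admin address (see the AdminListen comment in the config).
	conn, err := net.Dial("tcp", "localhost:9001")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Hypothetical request shape; the "request" key and handler name are
	// assumptions, not taken from this diff.
	req := map[string]interface{}{"request": "getPeers"}
	if err := json.NewEncoder(conn).Encode(&req); err != nil {
		panic(err)
	}

	var resp map[string]interface{}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```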
@@ -317,7 +326,6 @@ func (a *admin) handleRequest(conn net.Conn) {
 
 // Send the response back
 if err := encoder.Encode(&send); err != nil {
-// fmt.Println("Admin socket JSON encode error:", err)
 return
 }
 
@@ -328,6 +336,7 @@ func (a *admin) handleRequest(conn net.Conn) {
 }
 }
 
+// asMap converts an admin_nodeInfo into a map of key/value pairs.
 func (n *admin_nodeInfo) asMap() map[string]interface{} {
 m := make(map[string]interface{}, len(*n))
 for _, p := range *n {
@@ -336,6 +345,7 @@ func (n *admin_nodeInfo) asMap() map[string]interface{} {
 return m
 }
 
+// toString creates a printable string representation of an admin_nodeInfo.
 func (n *admin_nodeInfo) toString() string {
 // TODO return something nicer looking than this
 var out []string
@@ -346,6 +356,7 @@ func (n *admin_nodeInfo) toString() string {
 return fmt.Sprint(*n)
 }
 
+// printInfos returns a newline separated list of strings from admin_nodeInfos, e.g. a printable string of info about all peers.
 func (a *admin) printInfos(infos []admin_nodeInfo) string {
 var out []string
 for _, info := range infos {
@@ -355,14 +366,13 @@ func (a *admin) printInfos(infos []admin_nodeInfo) string {
 return strings.Join(out, "\n")
 }
 
+// addPeer triggers a connection attempt to a node.
 func (a *admin) addPeer(addr string) error {
 u, err := url.Parse(addr)
 if err == nil {
 switch strings.ToLower(u.Scheme) {
 case "tcp":
 a.core.tcp.connect(u.Host)
-case "udp":
-a.core.udp.connect(u.Host)
 case "socks":
 a.core.tcp.connectSOCKS(u.Host, u.Path[1:])
 default:
@@ -371,21 +381,16 @@ func (a *admin) addPeer(addr string) error {
 } else {
 // no url scheme provided
 addr = strings.ToLower(addr)
-if strings.HasPrefix(addr, "udp:") {
-a.core.udp.connect(addr[4:])
-return nil
-} else {
-if strings.HasPrefix(addr, "tcp:") {
-addr = addr[4:]
-}
-a.core.tcp.connect(addr)
-return nil
+if strings.HasPrefix(addr, "tcp:") {
+addr = addr[4:]
 }
-return errors.New("invalid peer: " + addr)
+a.core.tcp.connect(addr)
+return nil
 }
 return nil
 }
 
+// removePeer disconnects an existing node (given by the node's port number).
 func (a *admin) removePeer(p string) error {
 iport, err := strconv.Atoi(p)
 if err != nil {
@@ -395,6 +400,7 @@ func (a *admin) removePeer(p string) error {
 return nil
 }
 
+// startTunWithMTU creates the tun/tap device, sets its address, and sets the MTU to the provided value.
 func (a *admin) startTunWithMTU(ifname string, iftapmode bool, ifmtu int) error {
 // Close the TUN first if open
 _ = a.core.tun.close()
@@ -423,6 +429,7 @@ func (a *admin) startTunWithMTU(ifname string, iftapmode bool, ifmtu int) error
 return nil
 }
 
+// getData_getSelf returns the self node's info for admin responses.
 func (a *admin) getData_getSelf() *admin_nodeInfo {
 table := a.core.switchTable.table.Load().(lookupTable)
 coords := table.self.getCoords()
@@ -434,6 +441,7 @@ func (a *admin) getData_getSelf() *admin_nodeInfo {
 return &self
 }
 
+// getData_getPeers returns info from Core.peers for an admin response.
 func (a *admin) getData_getPeers() []admin_nodeInfo {
 ports := a.core.peers.ports.Load().(map[switchPort]*peer)
 var peerInfos []admin_nodeInfo
@@ -457,6 +465,7 @@ func (a *admin) getData_getPeers() []admin_nodeInfo {
 return peerInfos
 }
 
+// getData_getSwitchPeers returns info from Core.switchTable for an admin response.
 func (a *admin) getData_getSwitchPeers() []admin_nodeInfo {
 var peerInfos []admin_nodeInfo
 table := a.core.switchTable.table.Load().(lookupTable)
@@ -478,6 +487,7 @@ func (a *admin) getData_getSwitchPeers() []admin_nodeInfo {
 return peerInfos
 }
 
+// getData_getDHT returns info from Core.dht for an admin response.
 func (a *admin) getData_getDHT() []admin_nodeInfo {
 var infos []admin_nodeInfo
 now := time.Now()
@@ -505,6 +515,7 @@ func (a *admin) getData_getDHT() []admin_nodeInfo {
 return infos
 }
 
+// getData_getSessions returns info from Core.sessions for an admin response.
 func (a *admin) getData_getSessions() []admin_nodeInfo {
 var infos []admin_nodeInfo
 getSessions := func() {
@@ -525,6 +536,7 @@ func (a *admin) getData_getSessions() []admin_nodeInfo {
 return infos
 }
 
+// getAllowedEncryptionPublicKeys returns the public keys permitted for incoming peer connections.
 func (a *admin) getAllowedEncryptionPublicKeys() []string {
 pubs := a.core.peers.getAllowedEncryptionPublicKeys()
 var out []string
@@ -534,6 +546,7 @@ func (a *admin) getAllowedEncryptionPublicKeys() []string {
 return out
 }
 
+// addAllowedEncryptionPublicKey whitelists a key for incoming peer connections.
 func (a *admin) addAllowedEncryptionPublicKey(bstr string) (err error) {
 boxBytes, err := hex.DecodeString(bstr)
 if err == nil {
@@ -544,6 +557,8 @@ func (a *admin) addAllowedEncryptionPublicKey(bstr string) (err error) {
 return
 }
 
+// removeAllowedEncryptionPublicKey removes a key from the whitelist for incoming peer connections.
+// If none are set, an empty list permits all incoming connections.
 func (a *admin) removeAllowedEncryptionPublicKey(bstr string) (err error) {
 boxBytes, err := hex.DecodeString(bstr)
 if err == nil {
@@ -554,6 +569,9 @@ func (a *admin) removeAllowedEncryptionPublicKey(bstr string) (err error) {
 return
 }
 
+// getResponse_dot returns a response for a graphviz dot formatted representation of the known parts of the network.
+// This is color-coded and labeled, and includes the self node, switch peers, nodes known to the DHT, and nodes with open sessions.
+// The graph is structured as a tree with directed links leading away from the root.
 func (a *admin) getResponse_dot() []byte {
 self := a.getData_getSelf()
 peers := a.getData_getSwitchPeers()
@@ -623,7 +641,7 @@ func (a *admin) getResponse_dot() []byte {
 for _, info := range infos {
 keys = append(keys, info.key)
 }
-// TODO sort
+// sort
 less := func(i, j int) bool {
 return keys[i] < keys[j]
 }

@@ -2,19 +2,19 @@ package config
 
 // NodeConfig defines all configuration values needed to run a signle yggdrasil node
 type NodeConfig struct {
-Listen string `comment:"Listen address for peer connections. Default is to listen for all\nUDP and TCP connections over IPv4 and IPv6."`
+Listen string `comment:"Listen address for peer connections. Default is to listen for all\nTCP connections over IPv4 and IPv6 with a random port."`
 AdminListen string `comment:"Listen address for admin connections Default is to listen for local\nconnections only on TCP port 9001."`
-Peers []string `comment:"List of connection strings for static peers in URI format, i.e.\ntcp://a.b.c.d:e, udp://a.b.c.d:e, socks://a.b.c.d:e/f.g.h.i:j etc."`
-AllowedEncryptionPublicKeys []string `comment:"List of peer encryption public keys to allow incoming/outgoing UDP\nor incoming TCP connections from. If left empty/undefined then all\nconnections will be allowed by default."`
+Peers []string `comment:"List of connection strings for static peers in URI format, i.e.\ntcp://a.b.c.d:e or socks://a.b.c.d:e/f.g.h.i:j"`
+AllowedEncryptionPublicKeys []string `comment:"List of peer encryption public keys to allow or incoming TCP\nconnections from. If left empty/undefined then all connections\nwill be allowed by default."`
 EncryptionPublicKey string `comment:"Your public encryption key. Your peers may ask you for this to put\ninto their AllowedEncryptionPublicKeys configuration."`
 EncryptionPrivateKey string `comment:"Your private encryption key. DO NOT share this with anyone!"`
 SigningPublicKey string `comment:"Your public signing key. You should not ordinarily need to share\nthis with anyone."`
 SigningPrivateKey string `comment:"Your private signing key. DO NOT share this with anyone!"`
 MulticastInterfaces []string `comment:"Regular expressions for which interfaces multicast peer discovery\nshould be enabled on. If none specified, multicast peer discovery is\ndisabled. The default value is .* which uses all interfaces."`
 IfName string `comment:"Local network interface name for TUN/TAP adapter, or \"auto\" to select\nan interface automatically, or \"none\" to run without TUN/TAP."`
 IfTAPMode bool `comment:"Set local network interface to TAP mode rather than TUN mode if\nsupported by your platform - option will be ignored if not."`
 IfMTU int `comment:"Maximux Transmission Unit (MTU) size for your local TUN/TAP interface.\nDefault is the largest supported size for your platform. The lowest\npossible value is 1280."`
-Net NetConfig `comment:"Extended options for connecting to peers over other networks."`
+//Net NetConfig `comment:"Extended options for connecting to peers over other networks."`
 }
 
 // NetConfig defines network/proxy related configuration values

@@ -1,12 +1,15 @@
 package yggdrasil
 
-import "io/ioutil"
-import "log"
-import "regexp"
-import "net"
-import "fmt"
-import "encoding/hex"
-import "yggdrasil/config"
+import (
+"encoding/hex"
+"fmt"
+"io/ioutil"
+"log"
+"net"
+"regexp"
+
+"yggdrasil/config"
+)
 
 // The Core object represents the Yggdrasil node. You should create a Core
 // object for each Yggdrasil node you plan to run.
@@ -27,7 +30,6 @@ type Core struct {
 searches searches
 multicast multicast
 tcp tcpInterface
-udp udpInterface
 log *log.Logger
 ifceExpr []*regexp.Regexp // the zone of link-local IPv6 peers must match this
 }
@@ -99,21 +101,11 @@ func (c *Core) Start(nc *config.NodeConfig, log *log.Logger) error {
 return err
 }
 
-if err := c.udp.init(c, nc.Listen); err != nil {
-c.log.Println("Failed to start UDP interface")
-return err
-}
-
 if err := c.router.start(); err != nil {
 c.log.Println("Failed to start router")
 return err
 }
 
-if err := c.switchTable.start(); err != nil {
-c.log.Println("Failed to start switch table ticker")
-return err
-}
-
 if err := c.admin.start(); err != nil {
 c.log.Println("Failed to start admin socket")
 return err
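Start now takes a *config.NodeConfig and a *log.Logger, per the hunk header above, and no longer brings up a UDP interface. A minimal usage sketch, assuming the GOPATH-style import paths used by the build script; a real NodeConfig would need its keys and other fields filled in:

```go
package main

import (
	"log"
	"os"

	"yggdrasil"
	"yggdrasil/config"
)

func main() {
	logger := log.New(os.Stdout, "", log.LstdFlags)

	// Placeholder config: a real one carries the node's keys, listen
	// addresses, peers, and TUN/TAP settings.
	var cfg config.NodeConfig

	var core yggdrasil.Core
	if err := core.Start(&cfg, logger); err != nil {
		logger.Println("yggdrasil failed to start:", err)
		return
	}
	logger.Println("yggdrasil node started")
}
```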

@@ -10,10 +10,13 @@ It also defines NodeID and TreeID as hashes of keys, and wraps hash functions
 
 */
 
-import "crypto/rand"
-import "crypto/sha512"
-import "golang.org/x/crypto/ed25519"
-import "golang.org/x/crypto/nacl/box"
+import (
+"crypto/rand"
+"crypto/sha512"
+
+"golang.org/x/crypto/ed25519"
+"golang.org/x/crypto/nacl/box"
+)
 
 ////////////////////////////////////////////////////////////////////////////////
 
@@ -121,7 +124,6 @@ func boxOpen(shared *boxSharedKey,
 boxed []byte,
 nonce *boxNonce) ([]byte, bool) {
 out := util_getBytes()
-//return append(out, boxed...), true // XXX HACK to test without encryption
 s := (*[boxSharedKeyLen]byte)(shared)
 n := (*[boxNonceLen]byte)(nonce)
 unboxed, success := box.OpenAfterPrecomputation(out, boxed, n, s)
@@ -134,7 +136,6 @@ func boxSeal(shared *boxSharedKey, unboxed []byte, nonce *boxNonce) ([]byte, *bo
 }
 nonce.update()
 out := util_getBytes()
-//return append(out, unboxed...), nonce // XXX HACK to test without encryption
 s := (*[boxSharedKeyLen]byte)(shared)
 n := (*[boxNonceLen]byte)(nonce)
 boxed := box.SealAfterPrecomputation(out, unboxed, n, s)
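boxSeal and boxOpen are thin wrappers around the nacl/box precomputation API used above. A standalone round trip with the same primitives; the repo's boxNonce handling is not reproduced here, a random nonce stands in for it:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// One keypair per side; only the parts needed for a single shared key are kept.
	_, senderPriv, _ := box.GenerateKey(rand.Reader)
	receiverPub, _, _ := box.GenerateKey(rand.Reader)

	// Precompute the shared key once, as the *boxSharedKey argument does above.
	var shared [32]byte
	box.Precompute(&shared, receiverPub, senderPriv)

	var nonce [24]byte
	rand.Read(nonce[:])

	boxed := box.SealAfterPrecomputation(nil, []byte("hello yggdrasil"), &nonce, &shared)
	unboxed, ok := box.OpenAfterPrecomputation(nil, boxed, &nonce, &shared)
	fmt.Println(ok, string(unboxed))
}
```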

@@ -36,7 +36,6 @@ func (c *Core) Init() {
 spub, spriv := newSigKeys()
 c.init(bpub, bpriv, spub, spriv)
 c.router.start()
-c.switchTable.start()
 }
 
 ////////////////////////////////////////////////////////////////////////////////
@@ -65,11 +64,10 @@ func (c *Core) DEBUG_getPeers() *peers {
 return &c.peers
 }
 
-func (ps *peers) DEBUG_newPeer(box boxPubKey,
-sig sigPubKey) *peer {
+func (ps *peers) DEBUG_newPeer(box boxPubKey, sig sigPubKey, link boxSharedKey) *peer {
 //in <-chan []byte,
 //out chan<- []byte) *peer {
-return ps.newPeer(&box, &sig) //, in, out)
+return ps.newPeer(&box, &sig, &link) //, in, out)
 }
 
 /*
@@ -127,8 +125,8 @@ func (l *switchLocator) DEBUG_getCoords() []byte {
 return l.getCoords()
 }
 
-func (c *Core) DEBUG_switchLookup(dest []byte, ttl uint64) (switchPort, uint64) {
-return c.switchTable.lookup(dest, ttl)
+func (c *Core) DEBUG_switchLookup(dest []byte) switchPort {
+return c.switchTable.lookup(dest)
 }
 
 /*
@@ -276,6 +274,10 @@ func (c *Core) DEBUG_newBoxKeys() (*boxPubKey, *boxPrivKey) {
 return newBoxKeys()
 }
 
+func (c *Core) DEBUG_getSharedKey(myPrivKey *boxPrivKey, othersPubKey *boxPubKey) *boxSharedKey {
+return getSharedKey(myPrivKey, othersPubKey)
+}
+
 func (c *Core) DEBUG_newSigKeys() (*sigPubKey, *sigPrivKey) {
 return newSigKeys()
 }
@@ -310,13 +312,11 @@ func (c *Core) DEBUG_init(bpub []byte,
 panic(err)
 }
 
-if err := c.switchTable.start(); err != nil {
-panic(err)
-}
 }
 
 ////////////////////////////////////////////////////////////////////////////////
 
+/*
 func (c *Core) DEBUG_setupAndStartGlobalUDPInterface(addrport string) {
 if err := c.udp.init(c, addrport); err != nil {
 c.log.Println("Failed to start UDP interface:", err)
@@ -342,6 +342,7 @@ func (c *Core) DEBUG_maybeSendUDPKeys(saddr string) {
 c.udp.sendKeys(addr)
 }
 }
+*/
 
 ////////////////////////////////////////////////////////////////////////////////
 
@@ -451,16 +452,25 @@ func (c *Core) DEBUG_addAllowedEncryptionPublicKey(boxStr string) {
 
 func DEBUG_simLinkPeers(p, q *peer) {
 // Sets q.out() to point to p and starts p.linkLoop()
-plinkIn := make(chan []byte, 1)
-qlinkIn := make(chan []byte, 1)
+p.linkOut, q.linkOut = make(chan []byte, 1), make(chan []byte, 1)
+go func() {
+for bs := range p.linkOut {
+q.handlePacket(bs)
+}
+}()
+go func() {
+for bs := range q.linkOut {
+p.handlePacket(bs)
+}
+}()
 p.out = func(bs []byte) {
-go q.handlePacket(bs, qlinkIn)
+go q.handlePacket(bs)
 }
 q.out = func(bs []byte) {
-go p.handlePacket(bs, plinkIn)
+go p.handlePacket(bs)
 }
-go p.linkLoop(plinkIn)
-go q.linkLoop(qlinkIn)
+go p.linkLoop()
+go q.linkLoop()
 }
 
 func (c *Core) DEBUG_simFixMTU() {

@@ -18,26 +18,37 @@ Slight changes *do* make it blackhole hard, bootstrapping isn't an easy problem
 
 */
 
-import "sort"
-import "time"
+import (
+"sort"
+"time"
+)
 
-//import "fmt"
+// Number of DHT buckets, equal to the number of bits in a NodeID.
+// Note that, in practice, nearly all of these will be empty.
+const dht_bucket_number = 8 * NodeIDLen
 
-// Maximum size for buckets and lookups
-// Exception for buckets if the next one is non-full
-const dht_bucket_number = 8 * NodeIDLen // This shouldn't be changed
-const dht_bucket_size = 2 // This should be at least 2
-const dht_lookup_size = 16 // This should be at least 1, below 2 is impractical
+// Number of nodes to keep in each DHT bucket.
+// Additional entries may be kept for peers, for bootstrapping reasons, if they don't already have an entry in the bucket.
+const dht_bucket_size = 2
+
+// Number of responses to include in a lookup.
+// If extras are given, they will be truncated from the response handler to prevent abuse.
+const dht_lookup_size = 16
 
+// dhtInfo represents everything we know about a node in the DHT.
+// This includes its key, a cache of it's NodeID, coords, and timing/ping related info for deciding who/when to ping nodes for maintenance.
 type dhtInfo struct {
 nodeID_hidden *NodeID
 key boxPubKey
 coords []byte
 send time.Time // When we last sent a message
 recv time.Time // When we last received a message
 pings int // Decide when to drop
+throttle time.Duration // Time to wait before pinging a node to bootstrap buckets, increases exponentially from 1 second to 1 minute
+bootstrapSend time.Time // The time checked/updated as part of throttle checks
 }
 
+// Returns the *NodeID associated with dhtInfo.key, calculating it on the fly the first time or from a cache all subsequent times.
 func (info *dhtInfo) getNodeID() *NodeID {
 if info.nodeID_hidden == nil {
 info.nodeID_hidden = getNodeID(&info.key)
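The new throttle field is described as growing exponentially from one second up to one minute between bootstrap pings. A hypothetical sketch of that growth rule; the function name and the exact doubling factor are assumptions based on the comment, not code from this diff:

```go
package example

import "time"

// nextThrottle illustrates the backoff described in the dhtInfo.throttle
// comment: start at one second, grow exponentially, and cap at one minute.
func nextThrottle(cur time.Duration) time.Duration {
	if cur < time.Second {
		return time.Second
	}
	cur *= 2
	if cur > time.Minute {
		cur = time.Minute
	}
	return cur
}
```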
@@ -45,17 +56,23 @@ func (info *dhtInfo) getNodeID() *NodeID {
 return info.nodeID_hidden
 }
 
+// The nodes we known in a bucket (a region of keyspace with a matching prefix of some length).
 type bucket struct {
 peers []*dhtInfo
 other []*dhtInfo
 }
 
+// Request for a node to do a lookup.
+// Includes our key and coords so they can send a response back, and the destination NodeID we want to ask about.
 type dhtReq struct {
 Key boxPubKey // Key of whoever asked
 Coords []byte // Coords of whoever asked
 Dest NodeID // NodeID they're asking about
 }
 
+// Response to a DHT lookup.
+// Includes the key and coords of the node that's responding, and the destination they were asked about.
+// The main part is Infos []*dhtInfo, the lookup response.
 type dhtRes struct {
 Key boxPubKey // key to respond to
 Coords []byte // coords to respond to
@@ -63,11 +80,16 @@ type dhtRes struct {
 Infos []*dhtInfo // response
 }
 
+// Information about a node, either taken from our table or from a lookup response.
+// Used to schedule pings at a later time (they're throttled to 1/second for background maintenance traffic).
 type dht_rumor struct {
 info *dhtInfo
 target *NodeID
 }
 
+// The main DHT struct.
+// Includes a slice of buckets, to organize known nodes based on their region of keyspace.
+// Also includes information about outstanding DHT requests and the rumor mill of nodes to ping at some point.
 type dht struct {
 core *Core
 nodeID NodeID
@@ -78,13 +100,16 @@ type dht struct {
 rumorMill []dht_rumor
 }
 
+// Initializes the DHT.
 func (t *dht) init(c *Core) {
 t.core = c
 t.nodeID = *t.core.GetNodeID()
-t.peers = make(chan *dhtInfo, 1)
+t.peers = make(chan *dhtInfo, 1024)
 t.reqs = make(map[boxPubKey]map[NodeID]time.Time)
 }
 
+// Reads a request, performs a lookup, and responds.
+// If the node that sent the request isn't in our DHT, but should be, then we add them.
 func (t *dht) handleReq(req *dhtReq) {
 // Send them what they asked for
 loc := t.core.switchTable.getLocator()
@@ -105,6 +130,8 @@ func (t *dht) handleReq(req *dhtReq) {
 //if req.dest != t.nodeID { t.ping(&info, info.getNodeID()) } // Or spam...
 }
 
+// Reads a lookup response, checks that we had sent a matching request, and processes the response info.
+// This mainly consists of updating the node we asked in our DHT (they responded, so we know they're still alive), and adding the response info to the rumor mill.
 func (t *dht) handleRes(res *dhtRes) {
 t.core.searches.handleDHTRes(res)
 reqs, isIn := t.reqs[res.Key]
@@ -115,11 +142,14 @@ func (t *dht) handleRes(res *dhtRes) {
 if !isIn {
 return
 }
+now := time.Now()
 rinfo := dhtInfo{
 key: res.Key,
 coords: res.Coords,
-send: time.Now(), // Technically wrong but should be OK...
-recv: time.Now(),
+send: now, // Technically wrong but should be OK...
+recv: now,
+throttle: time.Second,
+bootstrapSend: now,
 }
 // If they're already in the table, then keep the correct send time
 bidx, isOK := t.getBucketIndex(rinfo.getNodeID())
@@ -130,11 +160,15 @@ func (t *dht) handleRes(res *dhtRes) {
 for _, oldinfo := range b.peers {
 if oldinfo.key == rinfo.key {
 rinfo.send = oldinfo.send
+rinfo.throttle = oldinfo.throttle
+rinfo.bootstrapSend = oldinfo.bootstrapSend
 }
 }
 for _, oldinfo := range b.other {
 if oldinfo.key == rinfo.key {
 rinfo.send = oldinfo.send
+rinfo.throttle = oldinfo.throttle
+rinfo.bootstrapSend = oldinfo.bootstrapSend
 }
 }
 // Insert into table
@@ -153,6 +187,7 @@ func (t *dht) handleRes(res *dhtRes) {
 }
 }
 
+// Does a DHT lookup and returns the results, sorted in ascending order of distance from the destination.
 func (t *dht) lookup(nodeID *NodeID, allowCloser bool) []*dhtInfo {
 // FIXME this allocates a bunch, sorts, and keeps the part it likes
 // It would be better to only track the part it likes to begin with
@@ -188,16 +223,19 @@ func (t *dht) lookup(nodeID *NodeID, allowCloser bool) []*dhtInfo {
 return res
 }
 
+// Gets the bucket for a specified matching prefix length.
 func (t *dht) getBucket(bidx int) *bucket {
 return &t.buckets_hidden[bidx]
 }
 
+// Lists the number of buckets.
 func (t *dht) nBuckets() int {
 return len(t.buckets_hidden)
 }
 
+// Inserts a node into the DHT if they meet certain requirements.
+// In particular, they must either be a peer that's not already in the DHT, or else be someone we should insert into the DHT (see: shouldInsert).
 func (t *dht) insertIfNew(info *dhtInfo, isPeer bool) {
-//fmt.Println("DEBUG: dht insertIfNew:", info.getNodeID(), info.coords)
 // Insert if no "other" entry already exists
 nodeID := info.getNodeID()
 bidx, isOK := t.getBucketIndex(nodeID)
@@ -215,8 +253,8 @@ func (t *dht) insertIfNew(info *dhtInfo, isPeer bool) {
 }
 }
 
+// Adds a node to the DHT, possibly removing another node in the process.
 func (t *dht) insert(info *dhtInfo, isPeer bool) {
-//fmt.Println("DEBUG: dht insert:", info.getNodeID(), info.coords)
 // First update the time on this info
 info.recv = time.Now()
 // Get the bucket for this node
@@ -231,6 +269,9 @@ func (t *dht) insert(info *dhtInfo, isPeer bool) {
 // This speeds up bootstrapping
 info.recv = info.recv.Add(-time.Hour)
 }
+if isPeer || info.throttle > time.Minute {
+info.throttle = time.Minute
+}
 // First drop any existing entry from the bucket
 b.drop(&info.key)
 // Now add to the *end* of the bucket
@@ -246,6 +287,7 @@ func (t *dht) insert(info *dhtInfo, isPeer bool) {
 }
 }
 
+// Gets the bucket index for the bucket where we would put the given NodeID.
 func (t *dht) getBucketIndex(nodeID *NodeID) (int, bool) {
 for bidx := 0; bidx < t.nBuckets(); bidx++ {
 them := nodeID[bidx/8] & (0x80 >> byte(bidx%8))
@@ -257,6 +299,8 @@ func (t *dht) getBucketIndex(nodeID *NodeID) (int, bool) {
 return t.nBuckets(), false
 }
 
+// Helper called by containsPeer, containsOther, and contains.
+// Returns true if a node with the same ID *and coords* is already in the given part of the bucket.
 func dht_bucket_check(newInfo *dhtInfo, infos []*dhtInfo) bool {
 // Compares if key and coords match
 if newInfo == nil {
@@ -286,18 +330,22 @@ func dht_bucket_check(newInfo *dhtInfo, infos []*dhtInfo) bool {
 return false
 }
 
+// Calls bucket_check over the bucket's peers infos.
 func (b *bucket) containsPeer(info *dhtInfo) bool {
 return dht_bucket_check(info, b.peers)
 }
 
+// Calls bucket_check over the bucket's other info.
 func (b *bucket) containsOther(info *dhtInfo) bool {
 return dht_bucket_check(info, b.other)
 }
 
+// returns containsPeer || containsOther
 func (b *bucket) contains(info *dhtInfo) bool {
 return b.containsPeer(info) || b.containsOther(info)
 }
 
+// Removes a node with the corresponding key, if any, from a bucket.
 func (b *bucket) drop(key *boxPubKey) {
 clean := func(infos []*dhtInfo) []*dhtInfo {
 cleaned := infos[:0]
@@ -313,13 +361,13 @@ func (b *bucket) drop(key *boxPubKey) {
 b.other = clean(b.other)
 }
 
+// Sends a lookup request to the specified node.
 func (t *dht) sendReq(req *dhtReq, dest *dhtInfo) {
 // Send a dhtReq to the node in dhtInfo
 bs := req.encode()
 shared := t.core.sessions.getSharedKey(&t.core.boxPriv, &dest.key)
 payload, nonce := boxSeal(shared, bs, nil)
 p := wire_protoTrafficPacket{
-TTL: ^uint64(0),
 Coords: dest.coords,
 ToKey: dest.key,
 FromKey: t.core.boxPub,
@@ -339,13 +387,13 @@ func (t *dht) sendReq(req *dhtReq, dest *dhtInfo) {
 reqsToDest[req.Dest] = time.Now()
 }
 
+// Sends a lookup response to the specified node.
 func (t *dht) sendRes(res *dhtRes, req *dhtReq) {
 // Send a reply for a dhtReq
 bs := res.encode()
 shared := t.core.sessions.getSharedKey(&t.core.boxPriv, &req.Key)
 payload, nonce := boxSeal(shared, bs, nil)
 p := wire_protoTrafficPacket{
-TTL: ^uint64(0),
 Coords: req.Coords,
 ToKey: req.Key,
 FromKey: t.core.boxPub,
@@ -356,10 +404,14 @@ func (t *dht) sendRes(res *dhtRes, req *dhtReq) {
 t.core.router.out(packet)
 }
 
+// Returns true of a bucket contains no peers and no other nodes.
 func (b *bucket) isEmpty() bool {
 return len(b.peers)+len(b.other) == 0
 }
 
+// Gets the next node that should be pinged from the bucket.
+// There's a cooldown of 6 seconds between ping attempts for each node, to give them time to respond.
+// It returns the least recently pinged node, subject to that send cooldown.
 func (b *bucket) nextToPing() *dhtInfo {
 // Check the nodes in the bucket
 // Return whichever one responded least recently
@@ -382,12 +434,16 @@ func (b *bucket) nextToPing() *dhtInfo {
 return toPing
 }
 
+// Returns a useful target address to ask about for pings.
+// Equal to the our node's ID, except for exactly 1 bit at the bucket index.
 func (t *dht) getTarget(bidx int) *NodeID {
 targetID := t.nodeID
 targetID[bidx/8] ^= 0x80 >> byte(bidx%8)
|
||||||
return &targetID
|
return &targetID
|
||||||
}
|
}
|
||||||
|
|
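The bucket/target comments above boil down to simple bit arithmetic on the NodeID. The following standalone sketch (a plain `[64]byte` stands in for the package's NodeID type; names are illustrative) shows how the first differing bit picks a bucket, and how flipping exactly that bit in our own ID produces a lookup target for that bucket:

```go
package main

import "fmt"

// id is a stand-in for the package's NodeID type (assumed to be a fixed-size byte array).
type id [64]byte

// bucketIndex returns the position of the first bit where a and b differ,
// scanning from the most significant bit of byte 0, and false if they are equal.
func bucketIndex(a, b id) (int, bool) {
	for bidx := 0; bidx < len(a)*8; bidx++ {
		mask := byte(0x80) >> byte(bidx%8)
		if a[bidx/8]&mask != b[bidx/8]&mask {
			return bidx, true
		}
	}
	return len(a) * 8, false
}

// target returns a copy of self with exactly one bit flipped at the bucket index,
// which is the kind of address the maintenance loop asks peers about to fill that bucket.
func target(self id, bidx int) id {
	t := self
	t[bidx/8] ^= 0x80 >> byte(bidx%8)
	return t
}

func main() {
	var self, other id
	other[0] = 0x10 // differs from self in bit 3 of byte 0
	bidx, ok := bucketIndex(self, other)
	fmt.Println(bidx, ok)                      // 3 true
	fmt.Printf("%#x\n", target(self, bidx)[0]) // 0x10
}
```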
+// Sends a ping to a node, or removes the node if it has failed to respond to too many pings.
+// If target is nil, we will ask the node about our own NodeID.
func (t *dht) ping(info *dhtInfo, target *NodeID) {
    if info.pings > 2 {
        bidx, isOK := t.getBucketIndex(info.getNodeID())
@@ -413,6 +469,8 @@ func (t *dht) ping(info *dhtInfo, target *NodeID) {
    t.sendReq(&req, info)
}

+// Adds a node info and target to the rumor mill.
+// The node will be asked about the target at a later point, if doing so would still be useful at the time.
func (t *dht) addToMill(info *dhtInfo, target *NodeID) {
    rumor := dht_rumor{
        info: info,
@@ -421,6 +479,11 @@ func (t *dht) addToMill(info *dhtInfo, target *NodeID) {
    t.rumorMill = append(t.rumorMill, rumor)
}

+// Regular periodic maintenance.
+// If the mill is empty, it adds two pings to the rumor mill.
+// The first is to the node that responded least recently, provided that it's been at least 1 minute, to make sure we eventually detect and remove unresponsive nodes.
+// The second is used for bootstrapping, and attempts to fill some bucket, iterating over buckets and resetting after it hits the last non-empty one.
+// If the mill is not empty, it pops nodes from the mill until it finds one that would be useful to ping (see: shouldInsert), and then pings it.
func (t *dht) doMaintenance() {
    // First clean up reqs
    for key, reqs := range t.reqs {
@@ -452,20 +515,39 @@ func (t *dht) doMaintenance() {
        }
    }
    if oldest != nil && time.Since(oldest.recv) > time.Minute {
+        // Ping the oldest node in the DHT, but don't ping nodes that have been checked within the last minute
        t.addToMill(oldest, nil)
-    } // if the DHT isn't empty
+    }
    // Refresh buckets
    if t.offset > last {
        t.offset = 0
    }
    target := t.getTarget(t.offset)
-    for _, info := range t.lookup(target, true) {
-        if time.Since(info.recv) > time.Minute {
-            t.addToMill(info, target)
-            t.offset++
-            break
+    func() {
+        closer := t.lookup(target, false)
+        for _, info := range closer {
+            // Throttled ping of a node that's closer to the destination
+            if time.Since(info.recv) > info.throttle {
+                t.addToMill(info, target)
+                t.offset++
+                info.bootstrapSend = time.Now()
+                info.throttle *= 2
+                if info.throttle > time.Minute {
+                    info.throttle = time.Minute
+                }
+                return
+            }
        }
-    }
+        if len(closer) == 0 {
+            // If we don't know of anyone closer at all, then there's a hole in our dht
+            // Ping the closest node we know and ignore the throttle, to try to fill it
+            for _, info := range t.lookup(target, true) {
+                t.addToMill(info, target)
+                t.offset++
+                return
+            }
+        }
+    }()
    //t.offset++
    }
    for len(t.rumorMill) > 0 {
@@ -484,6 +566,8 @@ func (t *dht) doMaintenance() {
    }
}

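The bootstrap path above doubles a per-node throttle each time the node is used, capping it at one minute. A hedged, self-contained sketch of just that backoff schedule (the `node` type and field names here are illustrative, not the package's dhtInfo):

```go
package main

import (
	"fmt"
	"time"
)

// node is an illustrative stand-in for dhtInfo, keeping only the fields the
// backoff logic needs.
type node struct {
	lastHeard time.Time
	throttle  time.Duration
}

// shouldBootstrapPing reports whether the node is due for another bootstrap
// ping and, if so, doubles its throttle (capped at one minute), mirroring the
// info.throttle *= 2 logic in the diff above.
func shouldBootstrapPing(n *node, now time.Time) bool {
	if now.Sub(n.lastHeard) <= n.throttle {
		return false
	}
	n.throttle *= 2
	if n.throttle > time.Minute {
		n.throttle = time.Minute
	}
	return true
}

func main() {
	n := &node{lastHeard: time.Now().Add(-time.Hour), throttle: 6 * time.Second}
	for i := 0; i < 5; i++ {
		fmt.Println(shouldBootstrapPing(n, time.Now()), n.throttle)
	}
}
```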
+// Returns true if it would be worth pinging the specified node.
+// This requires that the bucket doesn't already contain the node, and that either the bucket isn't full yet or the node is closer to us in keyspace than some other node in that bucket.
func (t *dht) shouldInsert(info *dhtInfo) bool {
    bidx, isOK := t.getBucketIndex(info.getNodeID())
    if !isOK {
@@ -504,6 +588,7 @@ func (t *dht) shouldInsert(info *dhtInfo) bool {
    return false
}

+// Returns true if the keyspace distance between the first and second node is smaller than the keyspace distance between the second and third node.
func dht_firstCloserThanThird(first *NodeID,
    second *NodeID,
    third *NodeID) bool {
@@ -518,11 +603,21 @@ func dht_firstCloserThanThird(first *NodeID,
    return false
}

+// Resets the DHT in response to coord changes.
+// This empties all buckets, resets the bootstrapping cycle to 0, and empties the rumor mill.
+// It adds all old "other" node info to the rumor mill, so they'll be pinged quickly.
+// If those nodes haven't also changed coords, then this is a relatively quick way to notify those nodes of our new coords and re-add them to our own DHT if they respond.
func (t *dht) reset() {
    // This is mostly so bootstrapping will reset to resend coords into the network
+    t.offset = 0
+    t.rumorMill = nil // reset mill
    for _, b := range t.buckets_hidden {
        b.peers = b.peers[:0]
+        for _, info := range b.other {
+            // Add other nodes to the rumor mill so they'll be pinged soon
+            // This will hopefully tell them our coords and re-learn theirs quickly if they haven't changed
+            t.addToMill(info, info.getNodeID())
+        }
        b.other = b.other[:0]
    }
-    t.offset = 0
}

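The comment on dht_firstCloserThanThird describes a Kademlia-style comparison of keyspace distances. As a hedged sketch over plain byte slices (the real function takes three *NodeID arguments and may differ in detail), "first is closer to second than third is" can be decided from the first byte where the XOR distances differ:

```go
package main

import "fmt"

// firstCloserThanThird reports whether a is closer to ref than b is, using a
// Kademlia-style XOR metric. This is a standalone sketch over byte slices,
// not the package's NodeID type.
func firstCloserThanThird(a, ref, b []byte) bool {
	for i := range ref {
		da := a[i] ^ ref[i]
		db := b[i] ^ ref[i]
		if da == db {
			continue
		}
		return da < db
	}
	return false
}

func main() {
	ref := []byte{0x00, 0x00}
	a := []byte{0x01, 0xFF} // differs from ref only in low bits of byte 0
	b := []byte{0x10, 0x00} // differs from ref in a higher bit of byte 0
	fmt.Println(firstCloserThanThird(a, ref, b)) // true: 0x01 < 0x10
}
```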
@@ -1,14 +1,22 @@
package yggdrasil

-// The NDP functions are needed when you are running with a
-// TAP adapter - as the operating system expects neighbor solicitations
-// for on-link traffic, this goroutine provides them
+// The ICMPv6 module implements functions to easily create ICMPv6
+// packets. These functions, when mixed with the built-in Go IPv6
+// and ICMP libraries, can be used to send control messages back
+// to the host. Examples include:
+// - NDP messages, when running in TAP mode
+// - Packet Too Big messages, when packets exceed the session MTU
+// - Destination Unreachable messages, when a session prohibits
+//   incoming traffic

-import "net"
-import "golang.org/x/net/ipv6"
-import "golang.org/x/net/icmp"
-import "encoding/binary"
-import "errors"
+import (
+    "encoding/binary"
+    "errors"
+    "net"
+
+    "golang.org/x/net/icmp"
+    "golang.org/x/net/ipv6"
+)

type macAddress [6]byte

@@ -39,6 +47,9 @@ func ipv6Header_Marshal(h *ipv6.Header) ([]byte, error) {
    return b, nil
}

+// Initialises the ICMPv6 module by assigning our link-local IPv6 address and
+// our MAC address. ICMPv6 messages will always appear to originate from these
+// addresses.
func (i *icmpv6) init(t *tunDevice) {
    i.tun = t

@@ -50,6 +61,10 @@ func (i *icmpv6) init(t *tunDevice) {
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0xFE}
}

+// Parses an incoming ICMPv6 packet. The packet provided may be either an
+// ethernet frame containing an IP packet, or the IP packet alone. This is
+// determined by whether the TUN/TAP adapter is running in TUN (layer 3) or
+// TAP (layer 2) mode.
func (i *icmpv6) parse_packet(datain []byte) {
    var response []byte
    var err error
@@ -69,6 +84,10 @@ func (i *icmpv6) parse_packet(datain []byte) {
    i.tun.iface.Write(response)
}

+// Unwraps the ethernet headers of an incoming ICMPv6 packet and hands off
+// the IP packet to the parse_packet_tun function for further processing.
+// A response buffer is also created for the response message, also complete
+// with ethernet headers.
func (i *icmpv6) parse_packet_tap(datain []byte) ([]byte, error) {
    // Store the peer MAC address
    copy(i.peermac[:6], datain[6:12])
@@ -97,6 +116,10 @@ func (i *icmpv6) parse_packet_tap(datain []byte) ([]byte, error) {
    return dataout, nil
}

+// Unwraps the IP headers of an incoming IPv6 packet and performs various
+// sanity checks on the packet - i.e. is the packet an ICMPv6 packet, does the
+// ICMPv6 message match a known expected type. The relevant handler function
+// is then called and a response packet may be returned.
func (i *icmpv6) parse_packet_tun(datain []byte) ([]byte, error) {
    // Parse the IPv6 packet headers
    ipv6Header, err := ipv6.ParseHeader(datain[:ipv6.HeaderLen])
@@ -149,6 +172,9 @@ func (i *icmpv6) parse_packet_tun(datain []byte) ([]byte, error) {
    return nil, errors.New("ICMPv6 type not matched")
}

+// Creates an ICMPv6 packet based on the given icmp.MessageBody and other
+// parameters, complete with ethernet and IP headers, which can be written
+// directly to a TAP adapter.
func (i *icmpv6) create_icmpv6_tap(dstmac macAddress, dst net.IP, src net.IP, mtype ipv6.ICMPType, mcode int, mbody icmp.MessageBody) ([]byte, error) {
    // Pass through to create_icmpv6_tun
    ipv6packet, err := i.create_icmpv6_tun(dst, src, mtype, mcode, mbody)
@@ -169,6 +195,10 @@ func (i *icmpv6) create_icmpv6_tap(dstmac macAddress, dst net.IP, src net.IP, mt
    return dataout, nil
}

+// Creates an ICMPv6 packet based on the given icmp.MessageBody and other
+// parameters, complete with IP headers only, which can be written directly to
+// a TUN adapter, or called directly by the create_icmpv6_tap function when
+// generating a message for TAP adapters.
func (i *icmpv6) create_icmpv6_tun(dst net.IP, src net.IP, mtype ipv6.ICMPType, mcode int, mbody icmp.MessageBody) ([]byte, error) {
    // Create the ICMPv6 message
    icmpMessage := icmp.Message{
@@ -208,9 +238,20 @@ func (i *icmpv6) create_icmpv6_tun(dst net.IP, src net.IP, mtype ipv6.ICMPType,
    return responsePacket, nil
}

+// Generates a response to an NDP discovery packet. This is effectively called
+// when the host operating system generates an NDP request for any address in
+// the fd00::/8 range, so that the operating system knows to route that traffic
+// to the Yggdrasil TAP adapter.
func (i *icmpv6) handle_ndp(in []byte) ([]byte, error) {
    // Ignore NDP requests for anything outside of fd00::/8
-    if in[8] != 0xFD {
+    var source address
+    copy(source[:], in[8:])
+    var snet subnet
+    copy(snet[:], in[8:])
+    switch {
+    case source.isValid():
+    case snet.isValid():
+    default:
        return nil, errors.New("Not an NDP for fd00::/8")
    }

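The new module comment above lists the control messages that get built with the x/net icmp and ipv6 packages. A minimal, standalone example of constructing one such message (Packet Too Big) and marshalling it with the IPv6 pseudo-header; the addresses and MTU here are placeholders, not values taken from the code:

```go
package main

import (
	"fmt"
	"net"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv6"
)

func main() {
	src := net.ParseIP("fd00::1") // illustrative addresses only
	dst := net.ParseIP("fd00::2")
	// Build a Packet Too Big message telling the sender to use a smaller MTU.
	msg := icmp.Message{
		Type: ipv6.ICMPTypePacketTooBig,
		Code: 0,
		Body: &icmp.PacketTooBig{
			MTU:  1280,
			Data: []byte{}, // normally the start of the offending packet
		},
	}
	// The IPv6 pseudo-header is needed so Marshal can fill in the checksum.
	payload, err := msg.Marshal(icmp.IPv6PseudoHeader(src, dst))
	if err != nil {
		panic(err)
	}
	fmt.Println("ICMPv6 payload length:", len(payload))
}
```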
@@ -1,10 +1,12 @@
package yggdrasil

-import "net"
-import "time"
-import "fmt"
-
-import "golang.org/x/net/ipv6"
+import (
+    "fmt"
+    "net"
+    "time"
+
+    "golang.org/x/net/ipv6"
+)

type multicast struct {
    core *Core
@@ -37,11 +39,9 @@ func (m *multicast) start() error {
    if err != nil {
        return err
    }
-    //defer conn.Close() // Let it close on its own when the application exits
    m.sock = ipv6.NewPacketConn(conn)
    if err = m.sock.SetControlMessage(ipv6.FlagDst, true); err != nil {
        // Windows can't set this flag, so we need to handle it in other ways
-        //panic(err)
    }

    go m.listen()
@@ -95,8 +95,6 @@ func (m *multicast) announce() {
    for {
        for _, iface := range m.interfaces() {
            m.sock.JoinGroup(&iface, groupAddr)
-            //err := n.sock.JoinGroup(&iface, groupAddr)
-            //if err != nil { panic(err) }
            addrs, err := iface.Addrs()
            if err != nil {
                panic(err)
@@ -133,8 +131,6 @@ func (m *multicast) listen() {
        if err != nil {
            panic(err)
        }
-        //if rcm == nil { continue } // wat
-        //fmt.Println("DEBUG:", "packet from:", fromAddr.String())
        if rcm != nil {
            // Windows can't set the flag needed to return a non-nil value here
            // So only make these checks if we get something useful back
@@ -149,19 +145,14 @@ func (m *multicast) listen() {
        anAddr := string(bs[:nBytes])
        addr, err := net.ResolveTCPAddr("tcp6", anAddr)
        if err != nil {
-            panic(err)
            continue
-        } // Panic for testing, remove later
+        }
        from := fromAddr.(*net.UDPAddr)
-        //fmt.Println("DEBUG:", "heard:", addr.IP.String(), "from:", from.IP.String())
        if addr.IP.String() != from.IP.String() {
            continue
        }
        addr.Zone = from.Zone
        saddr := addr.String()
-        //if _, isIn := n.peers[saddr]; isIn { continue }
-        //n.peers[saddr] = struct{}{}
        m.core.tcp.connect(saddr)
-        //fmt.Println("DEBUG:", "added multicast peer:", saddr)
    }
}

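For readers unfamiliar with the multicast peering code above, the sketch below shows the general x/net/ipv6 pattern it relies on: join a link-local group on every multicast-capable interface and write a small beacon to it. The group address, port, and beacon contents here are placeholders, not the ones Yggdrasil actually uses:

```go
package main

import (
	"fmt"
	"net"

	"golang.org/x/net/ipv6"
)

func main() {
	// Placeholder group/port for illustration; the real values live in multicast.go.
	group := &net.UDPAddr{IP: net.ParseIP("ff02::114"), Port: 9001}

	conn, err := net.ListenPacket("udp6", "[::]:9001")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	sock := ipv6.NewPacketConn(conn)
	// Ask for the destination address on received packets, so link-local
	// beacons can be told apart from other traffic (may fail on some platforms).
	_ = sock.SetControlMessage(ipv6.FlagDst, true)

	ifaces, _ := net.Interfaces()
	for _, iface := range ifaces {
		if iface.Flags&net.FlagMulticast == 0 {
			continue
		}
		iface := iface
		_ = sock.JoinGroup(&iface, group) // best effort, as in the diff above
	}

	// Announce our (made-up) TCP listener address to the local link.
	beacon := []byte("[fe80::1]:12345")
	if _, err := sock.WriteTo(beacon, nil, group); err != nil {
		fmt.Println("announce failed:", err)
	}
}
```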
@@ -4,40 +4,25 @@ package yggdrasil
// Commented code should be removed
// Live code should be better commented

-// FIXME (!) this part may be at least sligtly vulnerable to replay attacks
-// The switch message part should catch / drop old tstamps
-// So the damage is limited
-// But you could still mess up msgAnc / msgHops and break some things there
-// It needs to ignore messages with a lower seq
-// Probably best to start setting seq to a timestamp in that case...
-
-// FIXME (!?) if it takes too long to communicate all the msgHops, then things hit a horizon
-// That could happen with a peer over a high-latency link, with many msgHops
-// Possible workarounds:
-// 1. Pre-emptively send all hops when one is requested, or after any change
-//    Maybe requires changing how the throttle works and msgHops are saved
-//    In case some arrive out of order or are dropped
-//    This is relatively easy to implement, but could be wasteful
-// 2. Save your old locator, sigs, etc, so you can respond to older ancs
-//    And finish requesting an old anc before updating to a new one
-//    But that may lead to other issues if not done carefully...
-
-import "time"
-import "sync"
-import "sync/atomic"
-import "math"
-
-//import "fmt"
+import (
+    "sync"
+    "sync/atomic"
+    "time"
+)

+// The peers struct represents peers with an active connection.
+// Incoming packets are passed to the corresponding peer, which handles them somehow.
+// In most cases, this involves passing the packet to the handler for outgoing traffic to another peer.
+// In other cases, it's link protocol traffic used to build the spanning tree, in which case this checks signatures and passes the message along to the switch.
type peers struct {
    core *Core
    mutex sync.Mutex // Synchronize writes to atomic
    ports atomic.Value //map[Port]*peer, use CoW semantics
-    //ports map[Port]*peer
    authMutex sync.RWMutex
    allowedEncryptionPublicKeys map[boxPubKey]struct{}
}

+// Initializes the peers struct.
func (ps *peers) init(c *Core) {
    ps.mutex.Lock()
    defer ps.mutex.Unlock()
@@ -46,6 +31,7 @@ func (ps *peers) init(c *Core) {
    ps.allowedEncryptionPublicKeys = make(map[boxPubKey]struct{})
}

+// Returns true if an incoming peer connection to a key is allowed, either because the key is in the whitelist or because the whitelist is empty.
func (ps *peers) isAllowedEncryptionPublicKey(box *boxPubKey) bool {
    ps.authMutex.RLock()
    defer ps.authMutex.RUnlock()
@@ -53,18 +39,21 @@ func (ps *peers) isAllowedEncryptionPublicKey(box *boxPubKey) bool {
    return isIn || len(ps.allowedEncryptionPublicKeys) == 0
}

+// Adds a key to the whitelist.
func (ps *peers) addAllowedEncryptionPublicKey(box *boxPubKey) {
    ps.authMutex.Lock()
    defer ps.authMutex.Unlock()
    ps.allowedEncryptionPublicKeys[*box] = struct{}{}
}

+// Removes a key from the whitelist.
func (ps *peers) removeAllowedEncryptionPublicKey(box *boxPubKey) {
    ps.authMutex.Lock()
    defer ps.authMutex.Unlock()
    delete(ps.allowedEncryptionPublicKeys, *box)
}

+// Gets the whitelist of allowed keys for incoming connections.
func (ps *peers) getAllowedEncryptionPublicKeys() []boxPubKey {
    ps.authMutex.RLock()
    defer ps.authMutex.RUnlock()
@@ -75,74 +64,56 @@ func (ps *peers) getAllowedEncryptionPublicKeys() []boxPubKey {
    return keys
}

+// Atomically gets a map[switchPort]*peer of known peers.
func (ps *peers) getPorts() map[switchPort]*peer {
    return ps.ports.Load().(map[switchPort]*peer)
}

+// Stores a map[switchPort]*peer (note that you should take a mutex before store operations to avoid conflicts with other goroutines attempting to read/change/store at the same time).
func (ps *peers) putPorts(ports map[switchPort]*peer) {
    ps.ports.Store(ports)
}

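The getPorts/putPorts comments describe a copy-on-write map guarded by atomic.Value: readers load without locking, writers copy under a mutex and store the new map. A standalone sketch of that pattern (types and names are illustrative, not the peers struct itself):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// portMap shows the copy-on-write pattern described above: readers load the
// map from an atomic.Value without locking, while writers take a mutex, copy
// the map, modify the copy, and store it back.
type portMap struct {
	mutex sync.Mutex
	ports atomic.Value // holds a map[uint64]string
}

func (pm *portMap) get() map[uint64]string {
	return pm.ports.Load().(map[uint64]string)
}

func (pm *portMap) add(port uint64, name string) {
	pm.mutex.Lock()
	defer pm.mutex.Unlock()
	old := pm.get()
	next := make(map[uint64]string, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[port] = name
	pm.ports.Store(next) // readers atomically switch to the new map
}

func main() {
	pm := &portMap{}
	pm.ports.Store(map[uint64]string{})
	pm.add(1, "peer-a")
	fmt.Println(pm.get())
}
```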
+// Information known about a peer, including their box/sig keys, precomputed shared keys (static and ephemeral), a handler for their outgoing traffic, and queue sizes for local backpressure.
type peer struct {
-    // Rolling approximation of bandwidth, in bps, used by switch, updated by packet sends
-    // use get/update methods only! (atomic accessors as float64)
-    bandwidth uint64
+    queueSize int64 // used to track local backpressure
    bytesSent uint64 // To track bandwidth usage for getPeers
    bytesRecvd uint64 // To track bandwidth usage for getPeers
    // BUG: sync/atomic, 32 bit platforms need the above to be the first element
-    firstSeen time.Time // To track uptime for getPeers
-    box boxPubKey
-    sig sigPubKey
-    shared boxSharedKey
-    //in <-chan []byte
-    //out chan<- []byte
-    //in func([]byte)
-    out func([]byte)
-    core *Core
-    port switchPort
-    msgAnc *msgAnnounce
-    msgHops []*msgHop
-    myMsg *switchMessage
-    mySigs []sigInfo
-    // This is used to limit how often we perform expensive operations
-    // Specifically, processing switch messages, signing, and verifying sigs
-    // Resets at the start of each tick
-    throttle uint8
-    // Called when a peer is removed, to close the underlying connection, or via admin api
-    close func()
-    // To allow the peer to call close if idle for too long
-    lastAnc time.Time
+    core *Core
+    port switchPort
+    box boxPubKey
+    sig sigPubKey
+    shared boxSharedKey
+    linkShared boxSharedKey
+    firstSeen time.Time // To track uptime for getPeers
+    linkOut (chan []byte) // used for protocol traffic (to bypass queues)
+    doSend (chan struct{}) // tell the linkLoop to send a switchMsg
+    dinfo *dhtInfo // used to keep the DHT working
+    out func([]byte) // Set up by whatever created the peers struct, used to send packets to other nodes
+    close func() // Called when a peer is removed, to close the underlying connection, or via admin api
}

-const peer_Throttle = 1
-
-func (p *peer) getBandwidth() float64 {
-    bits := atomic.LoadUint64(&p.bandwidth)
-    return math.Float64frombits(bits)
+// Size of the queue of packets to be sent to the node.
+func (p *peer) getQueueSize() int64 {
+    return atomic.LoadInt64(&p.queueSize)
}

-func (p *peer) updateBandwidth(bytes int, duration time.Duration) {
-    if p == nil {
-        return
-    }
-    for ok := false; !ok; {
-        oldBits := atomic.LoadUint64(&p.bandwidth)
-        oldBandwidth := math.Float64frombits(oldBits)
-        bandwidth := oldBandwidth*7/8 + float64(bytes)/duration.Seconds()
-        bits := math.Float64bits(bandwidth)
-        ok = atomic.CompareAndSwapUint64(&p.bandwidth, oldBits, bits)
-    }
+// Used to increment or decrement the queue.
+func (p *peer) updateQueueSize(delta int64) {
+    atomic.AddInt64(&p.queueSize, delta)
}

-func (ps *peers) newPeer(box *boxPubKey,
-    sig *sigPubKey) *peer {
+// Creates a new peer with the specified box, sig, and linkShared keys, using the lowest unoccupied port number.
+func (ps *peers) newPeer(box *boxPubKey, sig *sigPubKey, linkShared *boxSharedKey) *peer {
    now := time.Now()
    p := peer{box: *box,
        sig: *sig,
        shared: *getSharedKey(&ps.core.boxPriv, box),
-        lastAnc: now,
+        linkShared: *linkShared,
        firstSeen: now,
-        core: ps.core}
+        doSend: make(chan struct{}, 1),
+        core: ps.core}
    ps.mutex.Lock()
    defer ps.mutex.Unlock()
    oldPorts := ps.getPorts()
@@ -161,11 +132,14 @@ func (ps *peers) newPeer(box *boxPubKey,
    return &p
}

+// Removes a peer for a given port, if one exists.
func (ps *peers) removePeer(port switchPort) {
-    // TODO? store linkIn in the peer struct, close it here? (once)
    if port == 0 {
        return
    } // Can't remove self peer
+    ps.core.router.doAdmin(func() {
+        ps.core.switchTable.unlockedRemovePeer(port)
+    })
    ps.mutex.Lock()
    oldPorts := ps.getPorts()
    p, isIn := oldPorts[port]
@@ -176,57 +150,61 @@ func (ps *peers) removePeer(port switchPort) {
    delete(newPorts, port)
    ps.putPorts(newPorts)
    ps.mutex.Unlock()
-    if isIn && p.close != nil {
-        p.close()
+    if isIn {
+        if p.close != nil {
+            p.close()
+        }
+        close(p.doSend)
    }
}

-func (p *peer) linkLoop(in <-chan []byte) {
-    ticker := time.NewTicker(time.Second)
-    defer ticker.Stop()
-    var counter uint8
-    var lastRSeq uint64
+// If called, sends a notification to each peer that they should send a new switch message.
+// Mainly called by the switch after an update.
+func (ps *peers) sendSwitchMsgs() {
+    ports := ps.getPorts()
+    for _, p := range ports {
+        if p.port == 0 {
+            continue
+        }
+        p.doSendSwitchMsgs()
+    }
+}

+// If called, sends a notification to the peer's linkLoop to trigger a switchMsg send.
+// Mainly called by sendSwitchMsgs or during linkLoop startup.
+func (p *peer) doSendSwitchMsgs() {
+    defer func() { recover() }() // In case there's a race with close(p.doSend)
+    select {
+    case p.doSend <- struct{}{}:
+    default:
+    }
+}

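doSendSwitchMsgs above uses a one-slot channel as a coalescing doorbell: a send either records that work is pending or is dropped if a notification is already queued. A minimal sketch of that pattern in isolation:

```go
package main

import "fmt"

// notify shows the buffered-channel "doorbell" pattern used by doSendSwitchMsgs:
// a send either queues exactly one pending notification or is dropped, so
// callers never block and the worker never sees a backlog.
func notify(doSend chan struct{}) {
	select {
	case doSend <- struct{}{}:
	default: // a notification is already pending; dropping this one is fine
	}
}

func main() {
	doSend := make(chan struct{}, 1)
	notify(doSend)
	notify(doSend) // second call is coalesced with the first
	fmt.Println("pending notifications:", len(doSend)) // 1
}
```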
+// This must be launched in a separate goroutine by whatever sets up the peer struct.
+// It handles link protocol traffic.
+func (p *peer) linkLoop() {
+    go p.doSendSwitchMsgs()
+    tick := time.NewTicker(time.Second)
+    defer tick.Stop()
    for {
        select {
-        case packet, ok := <-in:
+        case _, ok := <-p.doSend:
            if !ok {
                return
            }
-            p.handleLinkTraffic(packet)
-        case <-ticker.C:
-            if time.Since(p.lastAnc) > 16*time.Second && p.close != nil {
-                // Seems to have timed out, try to trigger a close
-                p.close()
+            p.sendSwitchMsg()
+        case _ = <-tick.C:
+            if p.dinfo != nil {
+                p.core.dht.peers <- p.dinfo
            }
-            p.throttle = 0
-            if p.port == 0 {
-                continue
-            } // Don't send announces on selfInterface
-            p.myMsg, p.mySigs = p.core.switchTable.createMessage(p.port)
-            var update bool
-            switch {
-            case p.msgAnc == nil:
-                update = true
-            case lastRSeq != p.msgAnc.Seq:
-                update = true
-            case p.msgAnc.Rseq != p.myMsg.seq:
-                update = true
-            case counter%4 == 0:
-                update = true
-            }
-            if update {
-                if p.msgAnc != nil {
-                    lastRSeq = p.msgAnc.Seq
-                }
-                p.sendSwitchAnnounce()
-            }
-            counter = (counter + 1) % 4
        }
    }
}

-func (p *peer) handlePacket(packet []byte, linkIn chan<- []byte) {
-    // TODO See comment in sendPacket about atomics technically being done wrong
+// Called to handle incoming packets.
+// Passes the packet to a handler for that packet type.
+func (p *peer) handlePacket(packet []byte) {
+    // FIXME this is off by stream padding and msg length overhead, should be done in tcp.go
    atomic.AddUint64(&p.bytesRecvd, uint64(len(packet)))
    pType, pTypeLen := wire_decode_uint64(packet)
    if pTypeLen == 0 {
@@ -238,31 +216,24 @@ func (p *peer) handlePacket(packet []byte) {
    case wire_ProtocolTraffic:
        p.handleTraffic(packet, pTypeLen)
    case wire_LinkProtocolTraffic:
-        {
-            select {
-            case linkIn <- packet:
-            default:
-            }
-        }
-    default: /*panic(pType) ;*/
-        return
+        p.handleLinkTraffic(packet)
+    default:
+        util_putBytes(packet)
    }
}

+// Called to handle traffic or protocolTraffic packets.
+// In either case, this reads from the coords of the packet header, does a switch lookup, and forwards to the next node.
func (p *peer) handleTraffic(packet []byte, pTypeLen int) {
-    if p.port != 0 && p.msgAnc == nil {
-        // Drop traffic until the peer manages to send us at least one anc
+    if p.port != 0 && p.dinfo == nil {
+        // Drop traffic until the peer manages to send us at least one good switchMsg
        return
    }
-    ttl, ttlLen := wire_decode_uint64(packet[pTypeLen:])
-    ttlBegin := pTypeLen
-    ttlEnd := pTypeLen + ttlLen
-    coords, coordLen := wire_decode_coords(packet[ttlEnd:])
-    coordEnd := ttlEnd + coordLen
-    if coordEnd == len(packet) {
+    coords, coordLen := wire_decode_coords(packet[pTypeLen:])
+    if coordLen >= len(packet) {
        return
    } // No payload
-    toPort, newTTL := p.core.switchTable.lookup(coords, ttl)
+    toPort := p.core.switchTable.lookup(coords)
    if toPort == p.port {
        return
    }
@@ -270,40 +241,50 @@ func (p *peer) handleTraffic(packet []byte, pTypeLen int) {
    if to == nil {
        return
    }
-    // This mutates the packet in-place if the length of the TTL changes!
-    ttlSlice := wire_encode_uint64(newTTL)
-    newTTLLen := len(ttlSlice)
-    shift := ttlLen - newTTLLen
-    copy(packet[shift:], packet[:pTypeLen])
-    copy(packet[ttlBegin+shift:], ttlSlice)
-    packet = packet[shift:]
    to.sendPacket(packet)
}

+// This just calls p.out(packet) for now.
func (p *peer) sendPacket(packet []byte) {
    // Is there ever a case where something more complicated is needed?
    // What if p.out blocks?
    p.out(packet)
-    // TODO this should really happen at the interface, to account for LIFO packet drops and additional per-packet/per-message overhead, but this should be pretty close... better to move it to the tcp/udp stuff *after* rewriting both to give a common interface
-    atomic.AddUint64(&p.bytesSent, uint64(len(packet)))
}

+// This wraps the packet in the inner (ephemeral) and outer (permanent) crypto layers.
+// It sends it to p.linkOut, which bypasses the usual packet queues.
func (p *peer) sendLinkPacket(packet []byte) {
-    bs, nonce := boxSeal(&p.shared, packet, nil)
+    innerPayload, innerNonce := boxSeal(&p.linkShared, packet, nil)
+    innerLinkPacket := wire_linkProtoTrafficPacket{
+        Nonce: *innerNonce,
+        Payload: innerPayload,
+    }
+    outerPayload := innerLinkPacket.encode()
+    bs, nonce := boxSeal(&p.shared, outerPayload, nil)
    linkPacket := wire_linkProtoTrafficPacket{
        Nonce: *nonce,
        Payload: bs,
    }
    packet = linkPacket.encode()
-    p.sendPacket(packet)
+    p.linkOut <- packet
}
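sendLinkPacket above seals the payload twice: once with the ephemeral link key and once with the permanent key. The sketch below shows the same nesting with plain NaCl box from golang.org/x/crypto; it skips the wire_linkProtoTrafficPacket framing and the precomputed shared keys the real code uses, so treat it as an assumption-laden illustration of the layering only:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

// seal encrypts msg from the sender's private key to the receiver's public key
// with a fresh random nonce, returning nonce||ciphertext.
func seal(msg []byte, theirPub, ourPriv *[32]byte) []byte {
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err)
	}
	return box.Seal(nonce[:], msg, &nonce, theirPub, ourPriv)
}

func main() {
	// Illustrative key pairs standing in for the ephemeral link keys and the
	// permanent node keys.
	_, ourLinkPriv, _ := box.GenerateKey(rand.Reader)
	theirLinkPub, _, _ := box.GenerateKey(rand.Reader)
	_, ourPermPriv, _ := box.GenerateKey(rand.Reader)
	theirPermPub, _, _ := box.GenerateKey(rand.Reader)

	msg := []byte("switch message")
	inner := seal(msg, theirLinkPub, ourLinkPriv)   // inner (ephemeral) layer
	outer := seal(inner, theirPermPub, ourPermPriv) // outer (permanent) layer
	fmt.Println(len(msg), "->", len(inner), "->", len(outer))
}
```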

+// Decrypts the outer (permanent) and inner (ephemeral) crypto layers on link traffic.
+// Identifies the link traffic type and calls the appropriate handler.
func (p *peer) handleLinkTraffic(bs []byte) {
    packet := wire_linkProtoTrafficPacket{}
    if !packet.decode(bs) {
        return
    }
-    payload, isOK := boxOpen(&p.shared, packet.Payload, &packet.Nonce)
+    outerPayload, isOK := boxOpen(&p.shared, packet.Payload, &packet.Nonce)
+    if !isOK {
+        return
+    }
+    innerPacket := wire_linkProtoTrafficPacket{}
+    if !innerPacket.decode(outerPayload) {
+        return
+    }
+    payload, isOK := boxOpen(&p.linkShared, innerPacket.Payload, &innerPacket.Nonce)
    if !isOK {
        return
    }
@@ -312,219 +293,80 @@ func (p *peer) handleLinkTraffic(bs []byte) {
        return
    }
    switch pType {
-    case wire_SwitchAnnounce:
-        p.handleSwitchAnnounce(payload)
-    case wire_SwitchHopRequest:
-        p.handleSwitchHopRequest(payload)
-    case wire_SwitchHop:
-        p.handleSwitchHop(payload)
+    case wire_SwitchMsg:
+        p.handleSwitchMsg(payload)
+    default:
+        util_putBytes(bs)
    }
}

-func (p *peer) handleSwitchAnnounce(packet []byte) {
-    //p.core.log.Println("DEBUG: handleSwitchAnnounce")
-    anc := msgAnnounce{}
-    //err := wire_decode_struct(packet, &anc)
-    //if err != nil { return }
-    if !anc.decode(packet) {
-        return
-    }
-    //if p.msgAnc != nil && anc.Seq != p.msgAnc.Seq { p.msgHops = nil }
-    if p.msgAnc == nil ||
-        anc.Root != p.msgAnc.Root ||
-        anc.Tstamp != p.msgAnc.Tstamp ||
-        anc.Seq != p.msgAnc.Seq {
-        p.msgHops = nil
-    }
-    p.msgAnc = &anc
-    p.processSwitchMessage()
-    p.lastAnc = time.Now()
-}
-
-func (p *peer) requestHop(hop uint64) {
-    //p.core.log.Println("DEBUG requestHop")
-    req := msgHopReq{}
-    req.Root = p.msgAnc.Root
-    req.Tstamp = p.msgAnc.Tstamp
-    req.Seq = p.msgAnc.Seq
-    req.Hop = hop
-    packet := req.encode()
-    p.sendLinkPacket(packet)
-}
-
-func (p *peer) handleSwitchHopRequest(packet []byte) {
-    //p.core.log.Println("DEBUG: handleSwitchHopRequest")
-    if p.throttle > peer_Throttle {
-        return
-    }
-    if p.myMsg == nil {
-        return
-    }
-    req := msgHopReq{}
-    if !req.decode(packet) {
-        return
-    }
-    if req.Root != p.myMsg.locator.root {
-        return
-    }
-    if req.Tstamp != p.myMsg.locator.tstamp {
-        return
-    }
-    if req.Seq != p.myMsg.seq {
-        return
-    }
-    if uint64(len(p.myMsg.locator.coords)) <= req.Hop {
-        return
-    }
-    res := msgHop{}
-    res.Root = p.myMsg.locator.root
-    res.Tstamp = p.myMsg.locator.tstamp
-    res.Seq = p.myMsg.seq
-    res.Hop = req.Hop
-    res.Port = p.myMsg.locator.coords[res.Hop]
-    sinfo := p.getSig(res.Hop)
-    //p.core.log.Println("DEBUG sig:", sinfo)
-    res.Next = sinfo.next
-    res.Sig = sinfo.sig
-    packet = res.encode()
-    p.sendLinkPacket(packet)
-}
-
-func (p *peer) handleSwitchHop(packet []byte) {
-    //p.core.log.Println("DEBUG: handleSwitchHop")
-    if p.throttle > peer_Throttle {
-        return
-    }
-    if p.msgAnc == nil {
-        return
-    }
-    res := msgHop{}
-    if !res.decode(packet) {
-        return
-    }
-    if res.Root != p.msgAnc.Root {
-        return
-    }
-    if res.Tstamp != p.msgAnc.Tstamp {
-        return
-    }
-    if res.Seq != p.msgAnc.Seq {
-        return
-    }
-    if res.Hop != uint64(len(p.msgHops)) {
-        return
-    } // always process in order
-    loc := switchLocator{coords: make([]switchPort, 0, len(p.msgHops)+1)}
-    loc.root = res.Root
-    loc.tstamp = res.Tstamp
-    for _, hop := range p.msgHops {
-        loc.coords = append(loc.coords, hop.Port)
-    }
-    loc.coords = append(loc.coords, res.Port)
-    thisHopKey := &res.Root
-    if res.Hop != 0 {
-        thisHopKey = &p.msgHops[res.Hop-1].Next
-    }
-    bs := getBytesForSig(&res.Next, &loc)
-    if p.core.sigs.check(thisHopKey, &res.Sig, bs) {
-        p.msgHops = append(p.msgHops, &res)
-        p.processSwitchMessage()
-    } else {
-        p.throttle++
-    }
-}
-
-func (p *peer) processSwitchMessage() {
-    //p.core.log.Println("DEBUG: processSwitchMessage")
-    if p.throttle > peer_Throttle {
-        return
-    }
-    if p.msgAnc == nil {
-        return
-    }
-    if uint64(len(p.msgHops)) < p.msgAnc.Len {
-        p.requestHop(uint64(len(p.msgHops)))
-        return
-    }
-    p.throttle++
-    if p.msgAnc.Len != uint64(len(p.msgHops)) {
-        return
-    }
-    msg := switchMessage{}
-    coords := make([]switchPort, 0, len(p.msgHops))
-    sigs := make([]sigInfo, 0, len(p.msgHops))
-    for idx, hop := range p.msgHops {
-        // Consistency checks, should be redundant (already checked these...)
-        if hop.Root != p.msgAnc.Root {
-            return
-        }
-        if hop.Tstamp != p.msgAnc.Tstamp {
-            return
-        }
-        if hop.Seq != p.msgAnc.Seq {
-            return
-        }
-        if hop.Hop != uint64(idx) {
-            return
-        }
-        coords = append(coords, hop.Port)
-        sigs = append(sigs, sigInfo{next: hop.Next, sig: hop.Sig})
-    }
-    msg.from = p.sig
-    msg.locator.root = p.msgAnc.Root
-    msg.locator.tstamp = p.msgAnc.Tstamp
-    msg.locator.coords = coords
-    msg.seq = p.msgAnc.Seq
-    //msg.RSeq = p.msgAnc.RSeq
-    //msg.Degree = p.msgAnc.Deg
-    p.core.switchTable.handleMessage(&msg, p.port, sigs)
-    if len(coords) == 0 {
-        return
-    }
-    // Reuse locator, set the coords to the peer's coords, to use in dht
-    msg.locator.coords = coords[:len(coords)-1]
-    // Pass a mesage to the dht informing it that this peer (still) exists
-    dinfo := dhtInfo{
-        key: p.box,
-        coords: msg.locator.getCoords(),
-    }
-    p.core.dht.peers <- &dinfo
-}
-
-func (p *peer) sendSwitchAnnounce() {
-    anc := msgAnnounce{}
-    anc.Root = p.myMsg.locator.root
-    anc.Tstamp = p.myMsg.locator.tstamp
-    anc.Seq = p.myMsg.seq
-    anc.Len = uint64(len(p.myMsg.locator.coords))
-    //anc.Deg = p.myMsg.Degree
-    if p.msgAnc != nil {
-        anc.Rseq = p.msgAnc.Seq
-    }
-    packet := anc.encode()
-    p.sendLinkPacket(packet)
-}
-
-func (p *peer) getSig(hop uint64) sigInfo {
-    //p.core.log.Println("DEBUG getSig:", len(p.mySigs), hop)
-    if hop < uint64(len(p.mySigs)) {
-        return p.mySigs[hop]
-    }
-    bs := getBytesForSig(&p.sig, &p.myMsg.locator)
-    sig := sigInfo{}
-    sig.next = p.sig
-    sig.sig = *sign(&p.core.sigPriv, bs)
-    p.mySigs = append(p.mySigs, sig)
-    //p.core.log.Println("DEBUG sig bs:", bs)
-    return sig
-}
-
-func getBytesForSig(next *sigPubKey, loc *switchLocator) []byte {
-    //bs, err := wire_encode_locator(loc)
-    //if err != nil { panic(err) }
-    bs := append([]byte(nil), next[:]...)
-    bs = append(bs, wire_encode_locator(loc)...)
-    //bs := wire_encode_locator(loc)
-    //bs = append(next[:], bs...)
-    return bs
-}
+// Gets a switchMsg from the switch, adds signed next-hop info for this peer, and sends it to them.
+func (p *peer) sendSwitchMsg() {
+    msg := p.core.switchTable.getMsg()
+    if msg == nil {
+        return
+    }
+    bs := getBytesForSig(&p.sig, msg)
+    msg.Hops = append(msg.Hops, switchMsgHop{
+        Port: p.port,
+        Next: p.sig,
+        Sig: *sign(&p.core.sigPriv, bs),
+    })
+    packet := msg.encode()
+    p.sendLinkPacket(packet)
+}
+
+// Handles a switchMsg from the peer, checking signatures and passing good messages to the switch.
+// Also creates a dhtInfo struct and arranges for it to be added to the dht (this is how dht bootstrapping begins).
+func (p *peer) handleSwitchMsg(packet []byte) {
+    var msg switchMsg
+    if !msg.decode(packet) {
+        return
+    }
+    if len(msg.Hops) < 1 {
+        p.core.peers.removePeer(p.port)
+    }
+    var loc switchLocator
+    prevKey := msg.Root
+    for idx, hop := range msg.Hops {
+        // Check signatures and collect coords for dht
+        sigMsg := msg
+        sigMsg.Hops = msg.Hops[:idx]
+        loc.coords = append(loc.coords, hop.Port)
+        bs := getBytesForSig(&hop.Next, &sigMsg)
+        if !p.core.sigs.check(&prevKey, &hop.Sig, bs) {
+            p.core.peers.removePeer(p.port)
+        }
+        prevKey = hop.Next
+    }
+    p.core.switchTable.handleMsg(&msg, p.port)
+    if !p.core.switchTable.checkRoot(&msg) {
+        // Bad switch message
+        // Stop forwarding traffic from it
+        // Stop refreshing it in the DHT
+        p.dinfo = nil
+        return
+    }
+    // Pass a mesage to the dht informing it that this peer (still) exists
+    loc.coords = loc.coords[:len(loc.coords)-1]
+    dinfo := dhtInfo{
+        key: p.box,
+        coords: loc.getCoords(),
+    }
+    p.core.dht.peers <- &dinfo
+    p.dinfo = &dinfo
+}
+
+// This generates the bytes that we sign or check the signature of for a switchMsg.
+// It begins with the next node's key, followed by the root and the timestamp, followed by coords being advertised to the next node.
+func getBytesForSig(next *sigPubKey, msg *switchMsg) []byte {
+    var loc switchLocator
+    for _, hop := range msg.Hops {
+        loc.coords = append(loc.coords, hop.Port)
+    }
+    bs := append([]byte(nil), next[:]...)
+    bs = append(bs, msg.Root[:]...)
+    bs = append(bs, wire_encode_uint64(wire_intToUint(msg.TStamp))...)
+    bs = append(bs, wire_encode_coords(loc.getCoords())...)
+    return bs
+}
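handleSwitchMsg and getBytesForSig above implement a signature chain: each hop signs the next hop's key, the root, the timestamp, and the coords so far, and the receiver verifies each hop against the previous hop's key. The sketch below reproduces that walk with crypto/ed25519 and a deliberately simplified byte layout (no timestamp, toy coord encoding), so it illustrates the idea rather than the exact wire format:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// hop is an illustrative stand-in for switchMsgHop: the port advertised to the
// next node, the next node's public key, and the signature over the chain so far.
type hop struct {
	port uint64
	next ed25519.PublicKey
	sig  []byte
}

// bytesForSig mimics the shape of getBytesForSig above: next key, then root,
// then the coords advertised so far (timestamp omitted to keep the sketch short).
func bytesForSig(next, root ed25519.PublicKey, coords []uint64) []byte {
	bs := append([]byte(nil), next...)
	bs = append(bs, root...)
	for _, c := range coords {
		bs = append(bs, byte(c)) // toy encoding, not the real varint format
	}
	return bs
}

func main() {
	rootPub, rootPriv, _ := ed25519.GenerateKey(rand.Reader)
	peerPub, _, _ := ed25519.GenerateKey(rand.Reader)

	// The root signs the first hop, naming the peer as the next node.
	sig := ed25519.Sign(rootPriv, bytesForSig(peerPub, rootPub, nil))
	msg := []hop{{port: 1, next: peerPub, sig: sig}}

	// Verification walks the chain: each hop's signature is checked against the
	// previous hop's key (starting from the root), as handleSwitchMsg does.
	prevKey := rootPub
	ok := true
	var coordsSoFar []uint64
	for _, h := range msg {
		bs := bytesForSig(h.next, rootPub, coordsSoFar)
		if !ed25519.Verify(prevKey, bs, h.sig) {
			ok = false
			break
		}
		coordsSoFar = append(coordsSoFar, h.port)
		prevKey = h.next
	}
	fmt.Println("chain verifies:", ok)
}
```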
@@ -2,8 +2,10 @@

package yggdrasil

-import "errors"
-import "log"
+import (
+    "errors"
+    "log"
+)

// Starts the function profiler. This is only supported when built with
// '-tags build'.

@ -22,13 +22,15 @@ package yggdrasil
|
|||||||
// The packet is passed to the session, which decrypts it, router.recvPacket
|
// The packet is passed to the session, which decrypts it, router.recvPacket
|
||||||
// The router then runs some sanity checks before passing it to the tun
|
// The router then runs some sanity checks before passing it to the tun
|
||||||
|
|
||||||
import "time"
|
import (
|
||||||
import "golang.org/x/net/icmp"
|
"time"
|
||||||
import "golang.org/x/net/ipv6"
|
|
||||||
|
|
||||||
//import "fmt"
|
"golang.org/x/net/icmp"
|
||||||
//import "net"
|
"golang.org/x/net/ipv6"
|
||||||
|
)
|
||||||
|
|
||||||
|
// The router struct has channels to/from the tun/tap device and a self peer (0), which is how messages are passed between this node and the peers/switch layer.
|
||||||
|
// The router's mainLoop goroutine is responsible for managing all information related to the dht, searches, and crypto sessions.
|
||||||
type router struct {
|
type router struct {
|
||||||
core *Core
|
core *Core
|
||||||
addr address
|
addr address
|
||||||
@ -40,11 +42,12 @@ type router struct {
|
|||||||
admin chan func() // pass a lambda for the admin socket to query stuff
|
admin chan func() // pass a lambda for the admin socket to query stuff
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Initializes the router struct, which includes setting up channels to/from the tun/tap.
|
||||||
func (r *router) init(core *Core) {
|
func (r *router) init(core *Core) {
|
||||||
r.core = core
|
r.core = core
|
||||||
r.addr = *address_addrForNodeID(&r.core.dht.nodeID)
|
r.addr = *address_addrForNodeID(&r.core.dht.nodeID)
|
||||||
in := make(chan []byte, 32) // TODO something better than this...
|
in := make(chan []byte, 32) // TODO something better than this...
|
||||||
p := r.core.peers.newPeer(&r.core.boxPub, &r.core.sigPub) //, out, in)
|
p := r.core.peers.newPeer(&r.core.boxPub, &r.core.sigPub, &boxSharedKey{})
|
||||||
p.out = func(packet []byte) {
|
p.out = func(packet []byte) {
|
||||||
// This is to make very sure it never blocks
|
// This is to make very sure it never blocks
|
||||||
select {
|
select {
|
||||||
@ -55,7 +58,7 @@ func (r *router) init(core *Core) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
r.in = in
|
r.in = in
|
||||||
r.out = func(packet []byte) { p.handlePacket(packet, nil) } // The caller is responsible for go-ing if it needs to not block
|
r.out = func(packet []byte) { p.handlePacket(packet) } // The caller is responsible for go-ing if it needs to not block
|
||||||
recv := make(chan []byte, 32)
|
recv := make(chan []byte, 32)
|
||||||
send := make(chan []byte, 32)
|
send := make(chan []byte, 32)
|
||||||
r.recv = recv
|
r.recv = recv
|
||||||
@ -67,12 +70,17 @@ func (r *router) init(core *Core) {
|
|||||||
// go r.mainLoop()
|
// go r.mainLoop()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Starts the mainLoop goroutine.
|
||||||
func (r *router) start() error {
|
func (r *router) start() error {
|
||||||
r.core.log.Println("Starting router")
|
r.core.log.Println("Starting router")
|
||||||
go r.mainLoop()
|
go r.mainLoop()
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Takes traffic from the tun/tap and passes it to router.send, or from r.in and handles incoming traffic.
|
||||||
|
// Also adds new peer info to the DHT.
|
||||||
|
// Also resets the DHT and sesssions in the event of a coord change.
|
||||||
|
// Also does periodic maintenance stuff.
|
||||||
func (r *router) mainLoop() {
|
func (r *router) mainLoop() {
|
||||||
ticker := time.NewTicker(time.Second)
|
ticker := time.NewTicker(time.Second)
|
||||||
defer ticker.Stop()
|
defer ticker.Stop()
|
||||||
@ -91,6 +99,7 @@ func (r *router) mainLoop() {
|
|||||||
case <-ticker.C:
|
case <-ticker.C:
|
||||||
{
|
{
|
||||||
// Any periodic maintenance stuff goes here
|
// Any periodic maintenance stuff goes here
|
||||||
|
r.core.switchTable.doMaintenance()
|
||||||
r.core.dht.doMaintenance()
|
r.core.dht.doMaintenance()
|
||||||
util_getBytes() // To slowly drain things
|
util_getBytes() // To slowly drain things
|
||||||
}
|
}
|
||||||
@ -100,6 +109,11 @@ func (r *router) mainLoop() {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Checks a packet's to/from address to make sure it's in the allowed range.
|
||||||
|
// If a session to the destination exists, gets the session and passes the packet to it.
|
||||||
|
// If no session exists, it triggers (or continues) a search.
|
||||||
|
// If the session hasn't responded recently, it triggers a ping or search to keep things alive or deal with broken coords *relatively* quickly.
|
||||||
|
// It also deals with oversized packets if there are MTU issues by calling into icmpv6.go to spoof PacketTooBig traffic, or DestinationUnreachable if the other side has their tun/tap disabled.
|
||||||
func (r *router) sendPacket(bs []byte) {
|
func (r *router) sendPacket(bs []byte) {
|
||||||
if len(bs) < 40 {
|
if len(bs) < 40 {
|
||||||
panic("Tried to send a packet shorter than a header...")
|
panic("Tried to send a packet shorter than a header...")
|
||||||
@ -224,9 +238,10 @@ func (r *router) sendPacket(bs []byte) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Called for incoming traffic by the session worker for that connection.
|
||||||
|
// Checks that the IP address is correct (matches the session) and passes the packet to the tun/tap.
|
||||||
func (r *router) recvPacket(bs []byte, theirAddr *address, theirSubnet *subnet) {
|
func (r *router) recvPacket(bs []byte, theirAddr *address, theirSubnet *subnet) {
|
||||||
// Note: called directly by the session worker, not the router goroutine
|
// Note: called directly by the session worker, not the router goroutine
|
||||||
//fmt.Println("Recv packet")
|
|
||||||
if len(bs) < 24 {
|
if len(bs) < 24 {
|
||||||
util_putBytes(bs)
|
util_putBytes(bs)
|
||||||
return
|
return
|
||||||
@ -246,6 +261,7 @@ func (r *router) recvPacket(bs []byte, theirAddr *address, theirSubnet *subnet)
|
|||||||
r.recv <- bs
|
r.recv <- bs
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Checks incoming traffic type and passes it to the appropriate handler.
func (r *router) handleIn(packet []byte) {
	pType, pTypeLen := wire_decode_uint64(packet)
	if pTypeLen == 0 {
@@ -256,10 +272,12 @@ func (r *router) handleIn(packet []byte) {
		r.handleTraffic(packet)
	case wire_ProtocolTraffic:
		r.handleProto(packet)
	default: /*panic("Should not happen in testing") ;*/
	default:
	}
}

// Handles incoming traffic, i.e. encapsulated ordinary IPv6 packets.
// Passes them to the crypto session worker to be decrypted and sent to the tun/tap.
func (r *router) handleTraffic(packet []byte) {
	defer util_putBytes(packet)
	p := wire_trafficPacket{}
@@ -270,10 +288,10 @@ func (r *router) handleTraffic(packet []byte) {
	if !isIn {
		return
	}
	//go func () { sinfo.recv<-&p }()
	sinfo.recv <- &p
}

// Handles protocol traffic by decrypting it, checking its type, and passing it to the appropriate handler for that traffic type.
func (r *router) handleProto(packet []byte) {
	// First parse the packet
	p := wire_protoTrafficPacket{}
@@ -282,7 +300,6 @@ func (r *router) handleProto(packet []byte) {
	}
	// Now try to open the payload
	var sharedKey *boxSharedKey
	//var theirPermPub *boxPubKey
	if p.ToKey == r.core.boxPub {
		// Try to open using our permanent key
		sharedKey = r.core.sessions.getSharedKey(&r.core.boxPriv, &p.FromKey)
@@ -300,7 +317,6 @@ func (r *router) handleProto(packet []byte) {
	if bsTypeLen == 0 {
		return
	}
	//fmt.Println("RECV bytes:", bs)
	switch bsType {
	case wire_SessionPing:
		r.handlePing(bs, &p.FromKey)
@@ -310,15 +326,12 @@ func (r *router) handleProto(packet []byte) {
		r.handleDHTReq(bs, &p.FromKey)
	case wire_DHTLookupResponse:
		r.handleDHTRes(bs, &p.FromKey)
	case wire_SearchRequest:
		r.handleSearchReq(bs)
	case wire_SearchResponse:
		r.handleSearchRes(bs)
	default: /*panic("Should not happen in testing") ;*/
		return
	default:
		util_putBytes(packet)
	}
}
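// The dispatch pattern above (decode a type prefix, then switch on it) can be
// sketched in isolation. This is an illustrative stand-in only: it uses
// encoding/binary's Uvarint where the real code uses wire_decode_uint64, and
// the packet type constants here are invented for the example.
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	exampleTraffic  uint64 = 0 // assumed values, not the real wire constants
	exampleProtocol uint64 = 1
)

func dispatch(packet []byte) {
	pType, n := binary.Uvarint(packet) // n <= 0 means the prefix is missing or malformed
	if n <= 0 {
		return // drop packets without a valid type prefix
	}
	payload := packet[n:]
	switch pType {
	case exampleTraffic:
		fmt.Println("traffic packet:", payload)
	case exampleProtocol:
		fmt.Println("protocol packet:", payload)
	default:
		// unknown types are silently dropped, as in the switch above
	}
}

func main() {
	var buf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(buf[:], exampleProtocol)
	dispatch(append(buf[:n], 0xde, 0xad))
}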

// Decodes session pings from wire format and passes them to sessions.handlePing where they either create or update a session.
func (r *router) handlePing(bs []byte, fromKey *boxPubKey) {
	ping := sessionPing{}
	if !ping.decode(bs) {
@@ -328,10 +341,12 @@ func (r *router) handlePing(bs []byte, fromKey *boxPubKey) {
	r.core.sessions.handlePing(&ping)
}

// Handles session pongs (which are really pings with an extra flag to prevent acknowledgement).
func (r *router) handlePong(bs []byte, fromKey *boxPubKey) {
	r.handlePing(bs, fromKey)
}

// Decodes dht requests and passes them to dht.handleReq to trigger a lookup/response.
func (r *router) handleDHTReq(bs []byte, fromKey *boxPubKey) {
	req := dhtReq{}
	if !req.decode(bs) {
@@ -341,6 +356,7 @@ func (r *router) handleDHTReq(bs []byte, fromKey *boxPubKey) {
	r.core.dht.handleReq(&req)
}

// Decodes dht responses and passes them to dht.handleRes to update the DHT table and further pass them to the search code (if applicable).
func (r *router) handleDHTRes(bs []byte, fromKey *boxPubKey) {
	res := dhtRes{}
	if !res.decode(bs) {
@@ -350,22 +366,9 @@ func (r *router) handleDHTRes(bs []byte, fromKey *boxPubKey) {
	r.core.dht.handleRes(&res)
}

func (r *router) handleSearchReq(bs []byte) {
	req := searchReq{}
	if !req.decode(bs) {
		return
	}
	r.core.searches.handleSearchReq(&req)
}

func (r *router) handleSearchRes(bs []byte) {
	res := searchRes{}
	if !res.decode(bs) {
		return
	}
	r.core.searches.handleSearchRes(&res)
}

// Passed a function to call.
// This will send the function to r.admin and block until it finishes.
// It's used by the admin socket to ask the router mainLoop goroutine about information in the session or dht structs, which cannot be read safely from outside that goroutine.
func (r *router) doAdmin(f func()) {
	// Pass this a function that needs to be run by the router's main goroutine
	// It will pass the function to the router and wait for the router to finish
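// The doAdmin pattern described above (hand a closure to a single goroutine
// and block until it has run) can be sketched on its own. This is an
// illustrative stand-in, not the project's code: "worker" and "admin" are
// invented names for the example.
package main

import "fmt"

type worker struct {
	admin chan func()
}

func (w *worker) mainLoop() {
	for f := range w.admin {
		f() // only this goroutine ever touches the worker's internal state
	}
}

// doAdmin sends a function to the worker goroutine and waits for it to finish,
// so callers can safely read state that only the worker is allowed to touch.
func (w *worker) doAdmin(f func()) {
	done := make(chan struct{})
	w.admin <- func() {
		f()
		close(done)
	}
	<-done
}

func main() {
	w := &worker{admin: make(chan func())}
	go w.mainLoop()
	var answer int
	w.doAdmin(func() { answer = 42 })
	fmt.Println("read from the worker goroutine:", answer)
}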
@@ -11,14 +11,21 @@ package yggdrasil
// A new search packet is sent immediately after receiving a response
// A new search packet is sent periodically, once per second, in case a packet was dropped (this slowly causes the search to become parallel if the search doesn't time out but also doesn't finish within 1 second for whatever reason)

import "sort"
import "time"
//import "fmt"
import (
	"sort"
	"time"
)

// This defines the maximum number of dhtInfo that we keep track of for nodes to query in an ongoing search.
const search_MAX_SEARCH_SIZE = 16

// This defines the time after which we send a new search packet.
// Search packets are sent automatically immediately after a response is received.
// So this allows for timeouts and for long searches to become increasingly parallel.
const search_RETRY_TIME = time.Second

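// A minimal sketch of the retry rule described above: a search step is taken
// at most once per retry interval unless a response arrives first. The names
// (searchState, continueSearch) are invented for this example; the real code
// drives this from the router's ticker rather than direct calls.
package main

import (
	"fmt"
	"time"
)

const searchRetryTime = time.Second

type searchState struct {
	lastStep time.Time
}

// continueSearch only takes another step if nothing was sent recently,
// mirroring the time.Since(sinfo.time) < search_RETRY_TIME check in the code.
func (s *searchState) continueSearch(now time.Time) bool {
	if now.Sub(s.lastStep) < searchRetryTime {
		return false // too soon; wait for a response or the next tick
	}
	s.lastStep = now
	return true
}

func main() {
	s := &searchState{}
	t0 := time.Now()
	fmt.Println(s.continueSearch(t0))                             // true: first step
	fmt.Println(s.continueSearch(t0.Add(200 * time.Millisecond))) // false: retried too soon
	fmt.Println(s.continueSearch(t0.Add(2 * time.Second)))        // true: retry after a quiet second
}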
// Information about an ongoing search.
// Includes the target NodeID, the bitmask to match it to an IP, and the list of nodes to visit / already visited.
type searchInfo struct {
	dest NodeID
	mask NodeID
@@ -28,16 +35,19 @@ type searchInfo struct {
	visited map[NodeID]bool
}

// This stores a map of active searches.
type searches struct {
	core     *Core
	searches map[NodeID]*searchInfo
}

// Initializes the searches struct.
func (s *searches) init(core *Core) {
	s.core = core
	s.searches = make(map[NodeID]*searchInfo)
}

// Creates a new search info, adds it to the searches struct, and returns a pointer to the info.
func (s *searches) createSearch(dest *NodeID, mask *NodeID) *searchInfo {
	now := time.Now()
	for dest, sinfo := range s.searches {
@@ -56,6 +66,9 @@ func (s *searches) createSearch(dest *NodeID, mask *NodeID) *searchInfo {

////////////////////////////////////////////////////////////////////////////////

// Checks if there's an ongoing search related to a dhtRes.
// If there is, it adds the response info to the search and triggers a new search step.
// If there's no ongoing search, or if the dhtRes finished the search (it was from the target node), then don't do anything more.
func (s *searches) handleDHTRes(res *dhtRes) {
	sinfo, isIn := s.searches[res.Dest]
	if !isIn || s.checkDHTRes(sinfo, res) {
@@ -68,6 +81,10 @@ func (s *searches) handleDHTRes(res *dhtRes) {
	}
}

// Adds the information from a dhtRes to an ongoing search.
// Info about a node that has already been visited is not re-added to the search.
// Duplicate information about nodes toVisit is deduplicated (the newest information is kept).
// The toVisit list is sorted in ascending order of keyspace distance from the destination.
func (s *searches) addToSearch(sinfo *searchInfo, res *dhtRes) {
	// Add responses to toVisit if closer to dest than the res node
	from := dhtInfo{key: res.Key, coords: res.Coords}
@@ -98,6 +115,8 @@ func (s *searches) addToSearch(sinfo *searchInfo, res *dhtRes) {
	}
}

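// The toVisit ordering described above can be sketched with a stand-in
// distance metric. The real code uses its own keyspace ordering over NodeIDs;
// here plain XOR distance over small IDs stands in for it, which is an
// assumption made only for this example.
package main

import (
	"fmt"
	"sort"
)

func main() {
	dest := uint8(0b1010_0000)
	toVisit := []uint8{0b0000_0001, 0b1010_0001, 0b1000_0000, 0b1110_0000}
	// Sort candidates so the one "closest" to the destination is visited first.
	sort.Slice(toVisit, func(i, j int) bool {
		return toVisit[i]^dest < toVisit[j]^dest
	})
	fmt.Printf("visit order: %08b\n", toVisit)
}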
// If there are no nodes left toVisit, then this cleans up the search.
// Otherwise, it pops the closest node to the destination (in keyspace) off of the toVisit list and sends a dht ping.
func (s *searches) doSearchStep(sinfo *searchInfo) {
	if len(sinfo.toVisit) == 0 {
		// Dead end, do cleanup
@@ -107,11 +126,16 @@ func (s *searches) doSearchStep(sinfo *searchInfo) {
		// Send to the next search target
		var next *dhtInfo
		next, sinfo.toVisit = sinfo.toVisit[0], sinfo.toVisit[1:]
		var oldPings int
		oldPings, next.pings = next.pings, 0
		s.core.dht.ping(next, &sinfo.dest)
		next.pings = oldPings // Don't evict a node for searching with it too much
		sinfo.visited[*next.getNodeID()] = true
	}
}

// If we've recently sent a ping for this search, do nothing.
// Otherwise, doSearchStep and schedule another continueSearch to happen after search_RETRY_TIME.
func (s *searches) continueSearch(sinfo *searchInfo) {
	if time.Since(sinfo.time) < search_RETRY_TIME {
		return
@@ -134,6 +158,7 @@ func (s *searches) continueSearch(sinfo *searchInfo) {
	}()
}

// Calls create search, and initializes the iterative search parts of the struct before returning it.
func (s *searches) newIterSearch(dest *NodeID, mask *NodeID) *searchInfo {
	sinfo := s.createSearch(dest, mask)
	sinfo.toVisit = s.core.dht.lookup(dest, true)
@@ -141,6 +166,9 @@ func (s *searches) newIterSearch(dest *NodeID, mask *NodeID) *searchInfo {
	return sinfo
}

// Checks if a dhtRes is good (called by handleDHTRes).
// If the response is from the target, get/create a session, trigger a session ping, and return true.
// Otherwise return false.
func (s *searches) checkDHTRes(info *searchInfo, res *dhtRes) bool {
	them := getNodeID(&res.Key)
	var destMasked NodeID
@@ -169,127 +197,3 @@ func (s *searches) checkDHTRes(info *searchInfo, res *dhtRes) bool {
	delete(s.searches, res.Dest)
	return true
}

////////////////////////////////////////////////////////////////////////////////

type searchReq struct {
	key    boxPubKey // Who I am
	coords []byte    // Where I am
	dest   NodeID    // Who I'm trying to connect to
}

type searchRes struct {
	key    boxPubKey // Who I am
	coords []byte    // Where I am
	dest   NodeID    // Who I was asked about
}

func (s *searches) sendSearch(info *searchInfo) {
	now := time.Now()
	if now.Sub(info.time) < time.Second {
		return
	}
	loc := s.core.switchTable.getLocator()
	coords := loc.getCoords()
	req := searchReq{
		key:    s.core.boxPub,
		coords: coords,
		dest:   info.dest,
	}
	info.time = time.Now()
	s.handleSearchReq(&req)
}

func (s *searches) handleSearchReq(req *searchReq) {
	lookup := s.core.dht.lookup(&req.dest, false)
	sent := false
	//fmt.Println("DEBUG len:", len(lookup))
	for _, info := range lookup {
		//fmt.Println("DEBUG lup:", info.getNodeID())
		if dht_firstCloserThanThird(info.getNodeID(),
			&req.dest,
			&s.core.dht.nodeID) {
			s.forwardSearch(req, info)
			sent = true
			break
		}
	}
	if !sent {
		s.sendSearchRes(req)
	}
}

func (s *searches) forwardSearch(req *searchReq, next *dhtInfo) {
	//fmt.Println("DEBUG fwd:", req.dest, next.getNodeID())
	bs := req.encode()
	shared := s.core.sessions.getSharedKey(&s.core.boxPriv, &next.key)
	payload, nonce := boxSeal(shared, bs, nil)
	p := wire_protoTrafficPacket{
		TTL:     ^uint64(0),
		Coords:  next.coords,
		ToKey:   next.key,
		FromKey: s.core.boxPub,
		Nonce:   *nonce,
		Payload: payload,
	}
	packet := p.encode()
	s.core.router.out(packet)
}

func (s *searches) sendSearchRes(req *searchReq) {
	//fmt.Println("DEBUG res:", req.dest, s.core.dht.nodeID)
	loc := s.core.switchTable.getLocator()
	coords := loc.getCoords()
	res := searchRes{
		key:    s.core.boxPub,
		coords: coords,
		dest:   req.dest,
	}
	bs := res.encode()
	shared := s.core.sessions.getSharedKey(&s.core.boxPriv, &req.key)
	payload, nonce := boxSeal(shared, bs, nil)
	p := wire_protoTrafficPacket{
		TTL:     ^uint64(0),
		Coords:  req.coords,
		ToKey:   req.key,
		FromKey: s.core.boxPub,
		Nonce:   *nonce,
		Payload: payload,
	}
	packet := p.encode()
	s.core.router.out(packet)
}

func (s *searches) handleSearchRes(res *searchRes) {
	info, isIn := s.searches[res.dest]
	if !isIn {
		return
	}
	them := getNodeID(&res.key)
	var destMasked NodeID
	var themMasked NodeID
	for idx := 0; idx < NodeIDLen; idx++ {
		destMasked[idx] = info.dest[idx] & info.mask[idx]
		themMasked[idx] = them[idx] & info.mask[idx]
	}
	//fmt.Println("DEBUG search res1:", themMasked, destMasked)
	//fmt.Println("DEBUG search res2:", *them, *info.dest, *info.mask)
	if themMasked != destMasked {
		return
	}
	// They match, so create a session and send a sessionRequest
	sinfo, isIn := s.core.sessions.getByTheirPerm(&res.key)
	if !isIn {
		sinfo = s.core.sessions.createSession(&res.key)
		_, isIn := s.core.sessions.getByTheirPerm(&res.key)
		if !isIn {
			panic("This should never happen")
		}
	}
	// FIXME (!) replay attacks could mess with coords? Give it a handle (tstamp)?
	sinfo.coords = res.coords
	sinfo.packet = info.packet
	s.core.sessions.ping(sinfo)
	// Cleanup
	delete(s.searches, res.dest)
}

@@ -6,6 +6,8 @@ package yggdrasil

import "time"

// All the information we know about an active session.
// This includes coords, permanent and ephemeral keys, handles and nonces, various sorts of timing information for timeout and maintenance, and some metadata for the admin API.
type sessionInfo struct {
	core      *Core
	theirAddr address
@@ -37,6 +39,7 @@ type sessionInfo struct {
	bytesRecvd uint64 // Bytes of real traffic received in this session
}

// Represents a session ping/pong packet, and includes information like public keys, a session handle, coords, a timestamp to prevent replays, and the tun/tap MTU.
type sessionPing struct {
	SendPermPub boxPubKey // Sender's permanent key
	Handle      handle    // Random number to ID session
@@ -47,7 +50,8 @@ type sessionPing struct {
	MTU uint16
}

// Returns true if the session was updated, false otherwise
// Updates session info in response to a ping, after checking that the ping is OK.
// Returns true if the session was updated, or false otherwise.
func (s *sessionInfo) update(p *sessionPing) bool {
	if !(p.Tstamp > s.tstamp) {
		// To protect against replay attacks
@@ -76,10 +80,14 @@ func (s *sessionInfo) update(p *sessionPing) bool {
	return true
}

// Returns true if the session has been idle for longer than the allowed timeout.
func (s *sessionInfo) timedout() bool {
	return time.Since(s.time) > time.Minute
}

// Struct of all active sessions.
// Sessions are indexed by handle.
// Additionally, stores maps of address/subnet onto keys, and keys onto handles.
type sessions struct {
	core *Core
	// Maps known permanent keys to their shared key, used by DHT a lot
@@ -94,6 +102,7 @@ type sessions struct {
	subnetToPerm map[subnet]*boxPubKey
}

// Initializes the session struct.
func (ss *sessions) init(core *Core) {
	ss.core = core
	ss.permShared = make(map[boxPubKey]*boxSharedKey)
@@ -104,6 +113,7 @@ func (ss *sessions) init(core *Core) {
	ss.subnetToPerm = make(map[subnet]*boxPubKey)
}

// Gets the session corresponding to a given handle.
func (ss *sessions) getSessionForHandle(handle *handle) (*sessionInfo, bool) {
	sinfo, isIn := ss.sinfos[*handle]
	if isIn && sinfo.timedout() {
@@ -113,6 +123,7 @@ func (ss *sessions) getSessionForHandle(handle *handle) (*sessionInfo, bool) {
	return sinfo, isIn
}

// Gets a session corresponding to an ephemeral session key used by this node.
func (ss *sessions) getByMySes(key *boxPubKey) (*sessionInfo, bool) {
	h, isIn := ss.byMySes[*key]
	if !isIn {
@@ -122,6 +133,7 @@ func (ss *sessions) getByMySes(key *boxPubKey) (*sessionInfo, bool) {
	return sinfo, isIn
}

// Gets a session corresponding to a permanent key used by the remote node.
func (ss *sessions) getByTheirPerm(key *boxPubKey) (*sessionInfo, bool) {
	h, isIn := ss.byTheirPerm[*key]
	if !isIn {
@@ -131,6 +143,7 @@ func (ss *sessions) getByTheirPerm(key *boxPubKey) (*sessionInfo, bool) {
	return sinfo, isIn
}

// Gets a session corresponding to an IPv6 address used by the remote node.
func (ss *sessions) getByTheirAddr(addr *address) (*sessionInfo, bool) {
	p, isIn := ss.addrToPerm[*addr]
	if !isIn {
@@ -140,6 +153,7 @@ func (ss *sessions) getByTheirAddr(addr *address) (*sessionInfo, bool) {
	return sinfo, isIn
}

// Gets a session corresponding to an IPv6 /64 subnet used by the remote node/network.
func (ss *sessions) getByTheirSubnet(snet *subnet) (*sessionInfo, bool) {
	p, isIn := ss.subnetToPerm[*snet]
	if !isIn {
@@ -149,6 +163,8 @@ func (ss *sessions) getByTheirSubnet(snet *subnet) (*sessionInfo, bool) {
	return sinfo, isIn
}

// Creates a new session and lazily cleans up old/timedout existing sessions.
// This includes initializing session info to sane defaults (e.g. lowest supported MTU).
func (ss *sessions) createSession(theirPermKey *boxPubKey) *sessionInfo {
	sinfo := sessionInfo{}
	sinfo.core = ss.core
@@ -201,6 +217,7 @@ func (ss *sessions) createSession(theirPermKey *boxPubKey) *sessionInfo {
	return &sinfo
}

// Closes a session, removing it from sessions maps and killing the worker goroutine.
func (sinfo *sessionInfo) close() {
	delete(sinfo.core.sessions.sinfos, sinfo.myHandle)
	delete(sinfo.core.sessions.byMySes, sinfo.mySesPub)
@@ -211,6 +228,7 @@ func (sinfo *sessionInfo) close() {
	close(sinfo.recv)
}

// Returns a session ping appropriate for the given session info.
func (ss *sessions) getPing(sinfo *sessionInfo) sessionPing {
	loc := ss.core.switchTable.getLocator()
	coords := loc.getCoords()
@@ -226,6 +244,9 @@ func (ss *sessions) getPing(sinfo *sessionInfo) sessionPing {
	return ref
}

// Gets the shared key for a pair of box keys.
// Used to cache recently used shared keys for protocol traffic.
// This comes up with dht req/res and session ping/pong traffic.
func (ss *sessions) getSharedKey(myPriv *boxPrivKey,
	theirPub *boxPubKey) *boxSharedKey {
	if skey, isIn := ss.permShared[*theirPub]; isIn {
@@ -244,10 +265,13 @@ func (ss *sessions) getSharedKey(myPriv *boxPrivKey,
	return ss.permShared[*theirPub]
}

// Sends a session ping by calling sendPingPong in ping mode.
func (ss *sessions) ping(sinfo *sessionInfo) {
	ss.sendPingPong(sinfo, false)
}

// Calls getPing, sets the appropriate ping/pong flag, encodes to wire format, and sends it.
// Updates the time the last ping was sent in the session info.
func (ss *sessions) sendPingPong(sinfo *sessionInfo, isPong bool) {
	ping := ss.getPing(sinfo)
	ping.IsPong = isPong
@@ -255,7 +279,6 @@ func (ss *sessions) sendPingPong(sinfo *sessionInfo, isPong bool) {
	shared := ss.getSharedKey(&ss.core.boxPriv, &sinfo.theirPermPub)
	payload, nonce := boxSeal(shared, bs, nil)
	p := wire_protoTrafficPacket{
		TTL:     ^uint64(0),
		Coords:  sinfo.coords,
		ToKey:   sinfo.theirPermPub,
		FromKey: ss.core.boxPub,
@@ -269,6 +292,8 @@ func (ss *sessions) sendPingPong(sinfo *sessionInfo, isPong bool) {
	}
}

// Handles a session ping, creating a session if needed and calling update, then possibly responding with a pong if the ping was in ping mode and the update was successful.
// If the session has a packet cached (common when first setting up a session), it will be sent.
func (ss *sessions) handlePing(ping *sessionPing) {
	// Get the corresponding session (or create a new session)
	sinfo, isIn := ss.getByTheirPerm(&ping.SendPermPub)
@@ -297,6 +322,9 @@ func (ss *sessions) handlePing(ping *sessionPing) {
	}
}
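// The shared-key caching described above can be sketched with the NaCl box
// primitives from golang.org/x/crypto. This is an illustrative stand-in: the
// sharedCache type and its field names are invented for the example, the cache
// is unbounded here, and the real code wraps its own key types rather than raw
// [32]byte arrays.
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

type sharedCache struct {
	shared map[[32]byte]*[32]byte // their public key -> precomputed shared key
}

// getSharedKey returns a cached shared key if one exists, and otherwise does
// the (comparatively expensive) Curve25519 precomputation once and caches it.
func (c *sharedCache) getSharedKey(myPriv, theirPub *[32]byte) *[32]byte {
	if skey, isIn := c.shared[*theirPub]; isIn {
		return skey
	}
	skey := new([32]byte)
	box.Precompute(skey, theirPub, myPriv)
	c.shared[*theirPub] = skey
	return skey
}

func main() {
	theirPub, _, _ := box.GenerateKey(rand.Reader)
	_, myPriv, _ := box.GenerateKey(rand.Reader)
	cache := &sharedCache{shared: make(map[[32]byte]*[32]byte)}
	k1 := cache.getSharedKey(myPriv, theirPub)
	k2 := cache.getSharedKey(myPriv, theirPub) // second call hits the cache
	fmt.Println("same cached key:", k1 == k2)
}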

// Used to subtract one nonce from another, staying in the range +- 64.
// This is used by the nonce progression machinery to advance the bitmask of recently received packets (indexed by nonce), or to check the appropriate bit of the bitmask.
// It's basically part of the machinery that prevents replays and duplicate packets.
func (n *boxNonce) minus(m *boxNonce) int64 {
	diff := int64(0)
	for idx := range n {
@@ -312,6 +340,9 @@ func (n *boxNonce) minus(m *boxNonce) int64 {
	return diff
}

// Get the MTU of the session.
// Will be equal to the smaller of this node's MTU or the remote node's MTU.
// If sending over links with a maximum message size (this was a thing with the old UDP code), it could be further lowered, to a minimum of 1280.
func (sinfo *sessionInfo) getMTU() uint16 {
	if sinfo.theirMTU == 0 || sinfo.myMTU == 0 {
		return 0
@@ -322,6 +353,7 @@ func (sinfo *sessionInfo) getMTU() uint16 {
	return sinfo.myMTU
}

// Checks if a packet's nonce is recent enough to fall within the window of allowed packets, and not already received.
func (sinfo *sessionInfo) nonceIsOK(theirNonce *boxNonce) bool {
	// The bitmask is to allow for some non-duplicate out-of-order packets
	diff := theirNonce.minus(&sinfo.theirNonce)
@@ -331,19 +363,24 @@ func (sinfo *sessionInfo) nonceIsOK(theirNonce *boxNonce) bool {
	return ^sinfo.nonceMask&(0x01<<uint64(-diff)) != 0
}

// Updates the nonce mask by (possibly) shifting the bitmask and setting the bit corresponding to this nonce to 1, and then updating the most recent nonce
func (sinfo *sessionInfo) updateNonce(theirNonce *boxNonce) {
	// Shift nonce mask if needed
	// Set bit
	diff := theirNonce.minus(&sinfo.theirNonce)
	if diff > 0 {
		// This nonce is newer, so shift the window before setting the bit, and update theirNonce in the session info.
		sinfo.nonceMask <<= uint64(diff)
		sinfo.nonceMask &= 0x01
		sinfo.theirNonce = *theirNonce
	} else {
		// This nonce is older, so set the bit but do not shift the window.
		sinfo.nonceMask &= 0x01 << uint64(-diff)
	}
	sinfo.theirNonce = *theirNonce
}

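// A self-contained sketch of the sliding-window replay check used above. It
// models nonces as plain int64 sequence numbers instead of boxNonce values,
// and combines the check (nonceIsOK) and the update (updateNonce) into one
// accept call; both are simplifications made only for this example.
package main

import "fmt"

type replayWindow struct {
	latest int64  // highest sequence number accepted so far
	mask   uint64 // bit i set => (latest - i) was already received
}

func (w *replayWindow) accept(seq int64) bool {
	diff := seq - w.latest
	switch {
	case diff > 0: // newer than anything seen: shift the window forward
		w.mask <<= uint64(diff)
		w.mask |= 0x01
		w.latest = seq
		return true
	case diff <= -64: // too old to track, reject
		return false
	case w.mask&(0x01<<uint64(-diff)) != 0: // already seen, reject duplicates
		return false
	default: // older but inside the window and not seen yet
		w.mask |= 0x01 << uint64(-diff)
		return true
	}
}

func main() {
	w := &replayWindow{}
	fmt.Println(w.accept(5), w.accept(7), w.accept(6), w.accept(7)) // true true true false
}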

// Resets all sessions to an uninitialized state.
// Called after coord changes, so attempts to use a session will trigger a new ping and notify the remote end of the coord change.
func (ss *sessions) resetInits() {
	for _, sinfo := range ss.sinfos {
		sinfo.init = false
@@ -352,10 +389,9 @@ func (ss *sessions) resetInits() {

////////////////////////////////////////////////////////////////////////////////

// This is for a per-session worker
// This is for a per-session worker.
// It handles calling the relatively expensive crypto operations
// It handles calling the relatively expensive crypto operations.
// It's also responsible for keeping nonces consistent
// It's also responsible for checking nonces and dropping out-of-date/duplicate packets, or else calling the function to update nonces if the packet is OK.

func (sinfo *sessionInfo) doWorker() {
	for {
		select {
@@ -375,6 +411,7 @@ func (sinfo *sessionInfo) doWorker() {
		}
	}
}

// This encrypts a packet, creates a trafficPacket struct, encodes it, and sends it to router.out to pass it to the switch layer.
func (sinfo *sessionInfo) doSend(bs []byte) {
	defer util_putBytes(bs)
	if !sinfo.init {
@@ -383,7 +420,6 @@ func (sinfo *sessionInfo) doSend(bs []byte) {
	payload, nonce := boxSeal(&sinfo.sharedSesKey, bs, &sinfo.myNonce)
	defer util_putBytes(payload)
	p := wire_trafficPacket{
		TTL:    ^uint64(0),
		Coords: sinfo.coords,
		Handle: sinfo.theirHandle,
		Nonce:  *nonce,
@@ -394,6 +430,11 @@ func (sinfo *sessionInfo) doSend(bs []byte) {
	sinfo.core.router.out(packet)
}

// This takes a trafficPacket and checks the nonce.
// If the nonce is OK, it decrypts the packet.
// If the decrypted packet is OK, it calls router.recvPacket to pass the packet to the tun/tap.
// If a packet does not decrypt successfully, it assumes the packet was truncated, and updates the MTU accordingly.
// TODO? remove the MTU updating part? That should never happen with TCP peers, and the old UDP code that caused it was removed (and if replaced, should be replaced with something that can reliably send messages with an arbitrary size).
func (sinfo *sessionInfo) doRecv(p *wire_trafficPacket) {
	defer util_putBytes(p.Payload)
	payloadSize := uint16(len(p.Payload))
@@ -415,7 +456,6 @@ func (sinfo *sessionInfo) doRecv(p *wire_trafficPacket) {
	}
	if newMTU < sinfo.myMTU {
		sinfo.myMTU = newMTU
		//sinfo.core.log.Println("DEBUG set MTU to:", sinfo.myMTU)
		sinfo.core.sessions.sendPingPong(sinfo, false)
		sinfo.mtuTime = time.Now()
		sinfo.wasMTUFixed = true
@@ -429,7 +469,6 @@ func (sinfo *sessionInfo) doRecv(p *wire_trafficPacket) {
		if time.Since(sinfo.mtuTime) > time.Minute {
			sinfo.myMTU = uint16(sinfo.core.tun.mtu)
			sinfo.mtuTime = time.Now()
			//sinfo.core.log.Println("DEBUG: Reset MTU to:", sinfo.myMTU)
		}
	}
	go func() { sinfo.core.router.admin <- fixSessionMTU }()
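// doSend/doRecv above ultimately wrap NaCl box encryption. A minimal sketch of
// that seal/open round trip using golang.org/x/crypto/nacl/box with a
// precomputed shared key; the fixed nonce is an illustrative simplification
// and is not how the real per-session nonces work.
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	alicePub, alicePriv, _ := box.GenerateKey(rand.Reader)
	bobPub, bobPriv, _ := box.GenerateKey(rand.Reader)

	// Both sides can precompute the same shared key from their own private key
	// and the other side's public key.
	var sharedA, sharedB [32]byte
	box.Precompute(&sharedA, bobPub, alicePriv)
	box.Precompute(&sharedB, alicePub, bobPriv)

	var nonce [24]byte // illustrative only; real sessions advance this per packet
	payload := box.SealAfterPrecomputation(nil, []byte("hello over an encrypted session"), &nonce, &sharedA)

	// A failed Open (ok == false) is what doRecv treats as a possibly truncated
	// packet when deciding whether to lower the session MTU.
	msg, ok := box.OpenAfterPrecomputation(nil, payload, &nonce, &sharedB)
	fmt.Println(ok, string(msg))
}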
@@ -3,43 +3,57 @@ package yggdrasil
// This is where we record which signatures we've previously checked
// It's so we can avoid needlessly checking them again

import "sync"
import "time"
import (
	"sync"
	"time"
)

// This keeps track of what signatures have already been checked.
// It's used to skip expensive crypto operations, given that many signatures are likely to be the same for the average node's peers.
type sigManager struct {
	mutex       sync.RWMutex
	checked     map[sigBytes]knownSig
	lastCleaned time.Time
}

// Represents a known signature.
// Includes the key, the signature bytes, the bytes that were signed, and the time it was last used.
type knownSig struct {
	key  sigPubKey
	sig  sigBytes
	bs   []byte
	time time.Time
}

// Initializes the signature manager.
func (m *sigManager) init() {
	m.checked = make(map[sigBytes]knownSig)
}

// Checks if a key and signature match the supplied bytes.
// If the same key/sig/bytes have been checked before, it returns true from the cached results.
// If not, it checks the key, updates it in the cache if successful, and returns the checked results.
func (m *sigManager) check(key *sigPubKey, sig *sigBytes, bs []byte) bool {
	if m.isChecked(sig, bs) {
	if m.isChecked(key, sig, bs) {
		return true
	}
	verified := verify(key, bs, sig)
	if verified {
		m.putChecked(sig, bs)
		m.putChecked(key, sig, bs)
	}
	return verified
}

func (m *sigManager) isChecked(sig *sigBytes, bs []byte) bool {
// Checks the cache to see if this key/sig/bytes combination has already been verified.
// Returns true if it finds a match.
func (m *sigManager) isChecked(key *sigPubKey, sig *sigBytes, bs []byte) bool {
	m.mutex.RLock()
	defer m.mutex.RUnlock()
	k, isIn := m.checked[*sig]
	if !isIn {
		return false
	}
	if len(bs) != len(k.bs) {
	if k.key != *key || k.sig != *sig || len(bs) != len(k.bs) {
		return false
	}
	for idx := 0; idx < len(bs); idx++ {
@@ -51,7 +65,10 @@ func (m *sigManager) isChecked(sig *sigBytes, bs []byte) bool {
	return true
}

func (m *sigManager) putChecked(newsig *sigBytes, bs []byte) {
// Puts a new result into the cache.
// This result is then used by isChecked to skip the expensive crypto verification if it's needed again.
// This is useful because, for nodes with multiple peers, there is often a lot of overlap between the signatures provided by each peer.
func (m *sigManager) putChecked(key *sigPubKey, newsig *sigBytes, bs []byte) {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	now := time.Now()
@@ -64,6 +81,6 @@ func (m *sigManager) putChecked(newsig *sigBytes, bs []byte) {
		}
		m.lastCleaned = now
	}
	k := knownSig{bs: bs, time: now}
	k := knownSig{key: *key, sig: *newsig, bs: bs, time: now}
	m.checked[*newsig] = k
}
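// The caching idea above can be sketched with the standard library's ed25519.
// This is an illustrative stand-in, not the project's sigManager: it keys the
// cache by the signature bytes and stores the key and message alongside, so a
// repeated (key, sig, message) triple skips the comparatively slow Verify call.
package main

import (
	"bytes"
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"sync"
)

type cachedSig struct {
	key ed25519.PublicKey
	msg []byte
}

type sigCache struct {
	mutex   sync.RWMutex
	checked map[string]cachedSig // signature bytes -> what they were checked against
}

func (c *sigCache) check(key ed25519.PublicKey, sig, msg []byte) bool {
	c.mutex.RLock()
	k, isIn := c.checked[string(sig)]
	c.mutex.RUnlock()
	if isIn && bytes.Equal(k.key, key) && bytes.Equal(k.msg, msg) {
		return true // seen this exact triple before, skip the expensive verify
	}
	if !ed25519.Verify(key, msg, sig) {
		return false
	}
	c.mutex.Lock()
	c.checked[string(sig)] = cachedSig{key: key, msg: msg}
	c.mutex.Unlock()
	return true
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	msg := []byte("switch message hop")
	sig := ed25519.Sign(priv, msg)
	cache := &sigCache{checked: make(map[string]cachedSig)}
	fmt.Println(cache.check(pub, sig, msg)) // verifies and caches
	fmt.Println(cache.check(pub, sig, msg)) // served from the cache
}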
@@ -9,27 +9,30 @@ package yggdrasil
// TODO document/comment everything in a lot more detail

// TODO? use a pre-computed lookup table (python version had this)
// A little annoying to do with constant changes from bandwidth estimates
// A little annoying to do with constant changes from backpressure

import "time"
import "sync"
import "sync/atomic"
//import "fmt"
import (
	"sort"
	"sync"
	"sync/atomic"
	"time"
)

const switch_timeout = time.Minute
const switch_updateInterval = switch_timeout / 2
const switch_throttle = switch_updateInterval / 2

// You should be able to provide crypto signatures for this
// 1 signature per coord, from the *sender* to that coord
// E.g. A->B->C has sigA(A->B) and sigB(A->B->C)
// The switch locator represents the topology and network state dependent info about a node, minus the signatures that go with it.
// Nodes will pick the best root they see, provided that the root continues to push out updates with new timestamps.
// The coords represent a path from the root to a node.
// This path is generally part of a spanning tree, except possibly the last hop (it can loop when sending coords to your parent, but they see this and know not to use a looping path).
type switchLocator struct {
	root   sigPubKey
	tstamp int64
	coords []switchPort
}

// Returns true if the first sigPubKey has a higher TreeID.
func firstIsBetter(first, second *sigPubKey) bool {
	// Higher TreeID is better
	ftid := getTreeID(first)
@@ -44,6 +47,7 @@ func firstIsBetter(first, second *sigPubKey) bool {
	return false
}

// Returns a copy of the locator which can safely be mutated.
func (l *switchLocator) clone() switchLocator {
	// Used to create a deep copy for use in messages
	// Copy required because we need to mutate coords before sending
@@ -54,6 +58,7 @@ func (l *switchLocator) clone() switchLocator {
	return loc
}

// Gets the distance a locator is from the provided destination coords, with the coords provided in []byte format (used to compress integers sent over the wire).
func (l *switchLocator) dist(dest []byte) int {
	// Returns distance (on the tree) from these coords
	offset := 0
@@ -84,6 +89,7 @@ func (l *switchLocator) dist(dest []byte) int {
	return dist
}

// Gets coords in wire encoded format, with *no* length prefix.
func (l *switchLocator) getCoords() []byte {
	bs := make([]byte, 0, len(l.coords))
	for _, coord := range l.coords {
@@ -93,6 +99,8 @@ func (l *switchLocator) getCoords() []byte {
	return bs
}

// Returns true if this locator represents an ancestor of the locator given as an argument.
// Ancestor means that it's the parent node, or the parent of parent, and so on...
func (x *switchLocator) isAncestorOf(y *switchLocator) bool {
	if x.root != y.root {
		return false
@@ -108,43 +116,44 @@ func (x *switchLocator) isAncestorOf(y *switchLocator) bool {
	return true
}
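// getCoords and dist above deal with coords as wire-encoded unsigned integers.
// A minimal sketch of that idea, using the standard library's uvarint encoding
// as a stand-in for the project's own wire format (an assumption made for this
// example), plus the "distance on the tree" notion as hops via the closest
// common ancestor.
package main

import (
	"encoding/binary"
	"fmt"
)

func encodeCoords(coords []uint64) []byte {
	bs := make([]byte, 0, len(coords))
	var buf [binary.MaxVarintLen64]byte
	for _, c := range coords {
		n := binary.PutUvarint(buf[:], c)
		bs = append(bs, buf[:n]...)
	}
	return bs
}

func decodeCoords(bs []byte) []uint64 {
	var coords []uint64
	for len(bs) > 0 {
		c, n := binary.Uvarint(bs)
		if n <= 0 {
			break
		}
		coords = append(coords, c)
		bs = bs[n:]
	}
	return coords
}

// treeDist counts hops between two coord paths through their common prefix.
func treeDist(a, b []uint64) int {
	common := 0
	for common < len(a) && common < len(b) && a[common] == b[common] {
		common++
	}
	return (len(a) - common) + (len(b) - common)
}

func main() {
	a := []uint64{1, 3, 300}
	b := decodeCoords(encodeCoords([]uint64{1, 3, 7, 42}))
	fmt.Println(b, treeDist(a, b)) // [1 3 7 42] 3
}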

// Information about a peer, used by the switch to build the tree and eventually make routing decisions.
type peerInfo struct {
	key     sigPubKey     // ID of this peer
	locator switchLocator // Should be able to respond with signatures upon request
	degree  uint64        // Self-reported degree
	coords []switchPort // Coords of this peer (taken from coords of the sent locator)
	time      time.Time // Time this node was last seen
	firstSeen time.Time
	port switchPort // Interface number of this peer
	seq  uint64     // Seq number we last saw this peer advertise
	msg  switchMsg  // The wire switchMsg used
}

type switchMessage struct {
	from    sigPubKey     // key of the sender
	locator switchLocator // Locator advertised for the receiver, not the sender's loc!
	seq     uint64
}

// This is just a uint64 with a named type for clarity reasons.
type switchPort uint64

// This is the subset of the information about a peer needed to make routing decisions, and it's stored separately in an atomically accessed table, which gets hammered in the "hot loop" of the routing logic (see: peer.handleTraffic in peers.go).
type tableElem struct {
	port    switchPort
	locator switchLocator
}

// This is the subset of the information about all peers needed to make routing decisions, and it's stored separately in an atomically accessed table, which gets hammered in the "hot loop" of the routing logic (see: peer.handleTraffic in peers.go).
type lookupTable struct {
	self  switchLocator
	elems []tableElem
}

// This is switch information which is mutable and needs to be modified by other goroutines, but is not accessed atomically.
// Use the switchTable functions to access it safely using the RWMutex for synchronization.
type switchData struct {
	// All data that's mutable and used by exported Table methods
	// To be read/written with atomic.Value Store/Load calls
	locator switchLocator
	seq     uint64 // Sequence number, reported to peers, so they know about changes
	peers   map[switchPort]peerInfo
	sigs []sigInfo
	msg  *switchMsg
}

// All the information stored by the switch.
type switchTable struct {
	core *Core
	key  sigPubKey // Our own key
@@ -157,6 +166,7 @@ type switchTable struct {
	table atomic.Value //lookupTable
}

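// The "atomically accessed table" pattern mentioned in the comments above can
// be sketched with sync/atomic.Value: writers build a fresh table and Store it,
// while the hot path just Loads whatever snapshot is current. The table type
// here is a simplified stand-in for lookupTable.
package main

import (
	"fmt"
	"sync/atomic"
)

type table struct {
	elems map[uint64]string // port -> next-hop description, invented for the example
}

func main() {
	var current atomic.Value
	current.Store(&table{elems: map[uint64]string{1: "peer A"}})

	// Hot loop: read-only access, no locks.
	t := current.Load().(*table)
	fmt.Println(t.elems[1])

	// Slow path: build a replacement table and atomically swap it in.
	next := &table{elems: map[uint64]string{1: "peer A", 2: "peer B"}}
	current.Store(next)
	fmt.Println(current.Load().(*table).elems[2])
}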

// Initializes the switchTable struct.
func (t *switchTable) init(core *Core, key sigPubKey) {
	now := time.Now()
	t.core = core
@@ -169,58 +179,41 @@ func (t *switchTable) init(core *Core, key sigPubKey) {
	t.drop = make(map[sigPubKey]int64)
}

func (t *switchTable) start() error {
	doTicker := func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			<-ticker.C
			t.Tick()
		}
	}
	go doTicker()
	return nil
}

// Safely gets a copy of this node's locator.
func (t *switchTable) getLocator() switchLocator {
	t.mutex.RLock()
	defer t.mutex.RUnlock()
	return t.data.locator.clone()
}

func (t *switchTable) Tick() {
// Regular maintenance to possibly timeout/reset the root and similar.
func (t *switchTable) doMaintenance() {
	// Periodic maintenance work to keep things internally consistent
	t.mutex.Lock()         // Write lock
	defer t.mutex.Unlock() // Release lock when we're done
	t.cleanRoot()
	t.cleanPeers()
	t.cleanDropped()
}

// Updates the root periodically if it is ourself, or promotes ourself to root if we're better than the current root or if the current root has timed out.
func (t *switchTable) cleanRoot() {
	// TODO rethink how this is done?...
	// Get rid of the root if it looks like it's timed out
	now := time.Now()
	doUpdate := false
	//fmt.Println("DEBUG clean root:", now.Sub(t.time))
	if now.Sub(t.time) > switch_timeout {
		//fmt.Println("root timed out", t.data.locator)
		dropped := t.data.peers[t.parent]
		dropped.time = t.time
		t.drop[t.data.locator.root] = t.data.locator.tstamp
		doUpdate = true
		//t.core.log.Println("DEBUG: switch root timeout", len(t.drop))
	}
	// Or, if we're better than our root, root ourself
	if firstIsBetter(&t.key, &t.data.locator.root) {
		//fmt.Println("root is worse than us", t.data.locator.Root)
		doUpdate = true
		//t.core.log.Println("DEBUG: switch root replace with self", t.data.locator.Root)
	}
	// Or, if we are the root, possibly update our timestamp
	if t.data.locator.root == t.key &&
		now.Sub(t.time) > switch_updateInterval {
		//fmt.Println("root is self and old, updating", t.data.locator.Root)
		doUpdate = true
	}
	if doUpdate {
@@ -235,25 +228,27 @@ func (t *switchTable) cleanRoot() {
			}
		}
		t.data.locator = switchLocator{root: t.key, tstamp: now.Unix()}
		t.data.sigs = nil
		t.core.peers.sendSwitchMsgs()
	}
}

func (t *switchTable) cleanPeers() {
	now := time.Now()
	changed := false
	for idx, info := range t.data.peers {
		if info.port != switchPort(0) && now.Sub(info.time) > 6*time.Second /*switch_timeout*/ {
			//fmt.Println("peer timed out", t.key, info.locator)
			delete(t.data.peers, idx)
			changed = true
		}
	}
	if changed {
		t.updater.Store(&sync.Once{})
	}
}

// Removes a peer.
// Must be called by the router mainLoop goroutine, e.g. call router.doAdmin with a lambda that calls this.
// If the removed peer was this node's parent, it immediately tries to find a new parent.
func (t *switchTable) unlockedRemovePeer(port switchPort) {
	delete(t.data.peers, port)
	t.updater.Store(&sync.Once{})
	if port != t.parent {
		return
	}
	for _, info := range t.data.peers {
		t.unlockedHandleMsg(&info.msg, info.port)
	}
}

// Dropped is a list of roots that are better than the current root, but stopped sending new timestamps.
// If we switch to a new root, and that root is better than an old root that previously timed out, then we can clean up the old dropped root infos.
// This function is called periodically to do that cleanup.
func (t *switchTable) cleanDropped() {
	// TODO? only call this after root changes, not periodically
	for root := range t.drop {
@@ -263,33 +258,95 @@ func (t *switchTable) cleanDropped() {
	}
}

func (t *switchTable) createMessage(port switchPort) (*switchMessage, []sigInfo) {
|
// A switchMsg contains the root node's sig key, timestamp, and signed per-hop information about a path from the root node to some other node in the network.
|
||||||
t.mutex.RLock()
|
// This is exchanged with peers to construct the spanning tree.
|
||||||
defer t.mutex.RUnlock()
|
// A subset of this information, excluding the signatures, is used to construct locators that are used elsewhere in the code.
|
||||||
msg := switchMessage{from: t.key, locator: t.data.locator.clone()}
|
type switchMsg struct {
|
||||||
msg.locator.coords = append(msg.locator.coords, port)
|
Root sigPubKey
|
||||||
msg.seq = t.data.seq
|
TStamp int64
|
||||||
return &msg, t.data.sigs
|
Hops []switchMsgHop
|
||||||
}
|
}
|
||||||
|
|
||||||
func (t *switchTable) handleMessage(msg *switchMessage, fromPort switchPort, sigs []sigInfo) {
|
// This represents the signed information about the path leading from the root the Next node, via the Port specified here.
|
||||||
|
type switchMsgHop struct {
|
||||||
|
Port switchPort
|
||||||
|
Next sigPubKey
|
||||||
|
Sig sigBytes
|
||||||
|
}
|
||||||
|
|
||||||
|
// This returns a *switchMsg to a copy of this node's current switchMsg, which can safely have additional information appended to Hops and sent to a peer.
|
||||||
|
func (t *switchTable) getMsg() *switchMsg {
|
||||||
|
t.mutex.RLock()
|
||||||
|
defer t.mutex.RUnlock()
|
||||||
|
if t.parent == 0 {
|
||||||
|
return &switchMsg{Root: t.key, TStamp: t.data.locator.tstamp}
|
||||||
|
} else if parent, isIn := t.data.peers[t.parent]; isIn {
|
||||||
|
msg := parent.msg
|
||||||
|
msg.Hops = append([]switchMsgHop(nil), msg.Hops...)
|
||||||
|
return &msg
|
||||||
|
} else {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// This function checks that the root information in a switchMsg is OK.
|
||||||
|
// In particular, that the root is better, or else the same as the current root but with a good timestamp, and that this root+timestamp haven't been dropped due to timeout.
|
||||||
|
func (t *switchTable) checkRoot(msg *switchMsg) bool {
|
||||||
|
// returns false if it's a dropped root, not a better root, or has an older timestamp
|
||||||
|
// returns true otherwise
|
||||||
|
// used elsewhere to keep inserting peers into the dht only if root info is OK
|
||||||
|
t.mutex.RLock()
|
||||||
|
defer t.mutex.RUnlock()
|
||||||
|
dropTstamp, isIn := t.drop[msg.Root]
|
||||||
|
switch {
|
||||||
|
case isIn && dropTstamp >= msg.TStamp:
|
||||||
|
return false
|
||||||
|
case firstIsBetter(&msg.Root, &t.data.locator.root):
|
||||||
|
return true
|
||||||
|
case t.data.locator.root != msg.Root:
|
||||||
|
return false
|
||||||
|
case t.data.locator.tstamp > msg.TStamp:
|
||||||
|
return false
|
||||||
|
default:
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// This is a mutexed wrapper to unlockedHandleMsg, and is called by the peer structs in peers.go to pass a switchMsg for that peer into the switch.
|
||||||
|
func (t *switchTable) handleMsg(msg *switchMsg, fromPort switchPort) {
|
||||||
t.mutex.Lock()
|
t.mutex.Lock()
|
||||||
defer t.mutex.Unlock()
|
defer t.mutex.Unlock()
|
||||||
|
t.unlockedHandleMsg(msg, fromPort)
|
||||||
|
}
|
||||||
|
|
||||||
|
// This updates the switch with information about a peer.
|
||||||
|
// Then comes the tricky part: it decides if it should update our own locator as a result.
|
||||||
|
// That happens if this node is already our parent, or is advertising a better root, or is advertising a better path to the same root, etc...
|
||||||
|
// There are a lot of very delicate, order-sensitive checks here, so it's best to just read the code if you need to understand what it's doing.
|
||||||
|
// It's very important not to change the order of the statements in the switch cases below unless you're absolutely sure that it's safe, including safe if used alongside nodes that used the previous order.
|
||||||
|
func (t *switchTable) unlockedHandleMsg(msg *switchMsg, fromPort switchPort) {
|
||||||
|
// TODO directly use a switchMsg instead of switchMessage + sigs
|
||||||
now := time.Now()
|
now := time.Now()
|
||||||
if len(msg.locator.coords) == 0 {
|
// Set up the sender peerInfo
|
||||||
return
|
var sender peerInfo
|
||||||
} // Should always have >=1 links
|
sender.locator.root = msg.Root
|
||||||
|
sender.locator.tstamp = msg.TStamp
|
||||||
|
prevKey := msg.Root
|
||||||
|
for _, hop := range msg.Hops {
|
||||||
|
// Build locator
|
||||||
|
sender.locator.coords = append(sender.locator.coords, hop.Port)
|
||||||
|
sender.key = prevKey
|
||||||
|
prevKey = hop.Next
|
||||||
|
}
|
||||||
|
sender.msg = *msg
|
||||||
oldSender, isIn := t.data.peers[fromPort]
|
oldSender, isIn := t.data.peers[fromPort]
|
||||||
if !isIn {
|
if !isIn {
|
||||||
oldSender.firstSeen = now
|
oldSender.firstSeen = now
|
||||||
}
|
}
|
||||||
sender := peerInfo{key: msg.from,
|
sender.firstSeen = oldSender.firstSeen
|
||||||
locator: msg.locator,
|
sender.port = fromPort
|
||||||
coords: msg.locator.coords[:len(msg.locator.coords)-1],
|
sender.time = now
|
||||||
time: now,
|
// Decide what to do
|
||||||
firstSeen: oldSender.firstSeen,
|
|
||||||
port: fromPort,
|
|
||||||
seq: msg.seq}
|
|
||||||
equiv := func(x *switchLocator, y *switchLocator) bool {
|
equiv := func(x *switchLocator, y *switchLocator) bool {
|
||||||
if x.root != y.root {
|
if x.root != y.root {
|
||||||
return false
|
return false
|
||||||
@ -305,20 +362,21 @@ func (t *switchTable) handleMessage(msg *switchMessage, fromPort switchPort, sig
|
|||||||
return true
|
return true
|
||||||
}
|
}
|
||||||
doUpdate := false
|
doUpdate := false
|
||||||
if !equiv(&msg.locator, &oldSender.locator) {
|
if !equiv(&sender.locator, &oldSender.locator) {
|
||||||
doUpdate = true
|
doUpdate = true
|
||||||
|
//sender.firstSeen = now // TODO? uncomment to prevent flapping?
|
||||||
}
|
}
|
||||||
t.data.peers[fromPort] = sender
|
t.data.peers[fromPort] = sender
|
||||||
updateRoot := false
|
updateRoot := false
|
||||||
oldParent, isIn := t.data.peers[t.parent]
|
oldParent, isIn := t.data.peers[t.parent]
|
||||||
noParent := !isIn
|
noParent := !isIn
|
||||||
noLoop := func() bool {
|
noLoop := func() bool {
|
||||||
for idx := 0; idx < len(sigs)-1; idx++ {
|
for idx := 0; idx < len(msg.Hops)-1; idx++ {
|
||||||
if sigs[idx].next == t.core.sigPub {
|
if msg.Hops[idx].Next == t.core.sigPub {
|
||||||
return false
|
return false
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if msg.locator.root == t.core.sigPub {
|
if sender.locator.root == t.core.sigPub {
|
||||||
return false
|
return false
|
||||||
}
|
}
|
||||||
return true
|
return true
|
||||||
@ -327,46 +385,43 @@ func (t *switchTable) handleMessage(msg *switchMessage, fromPort switchPort, sig
|
|||||||
pTime := oldParent.time.Sub(oldParent.firstSeen) + switch_timeout
|
pTime := oldParent.time.Sub(oldParent.firstSeen) + switch_timeout
|
||||||
// Really want to compare sLen/sTime and pLen/pTime
|
// Really want to compare sLen/sTime and pLen/pTime
|
||||||
// Cross multiplied to avoid divide-by-zero
|
// Cross multiplied to avoid divide-by-zero
|
||||||
cost := len(msg.locator.coords) * int(pTime.Seconds())
|
cost := len(sender.locator.coords) * int(pTime.Seconds())
|
||||||
pCost := len(t.data.locator.coords) * int(sTime.Seconds())
|
pCost := len(t.data.locator.coords) * int(sTime.Seconds())
|
||||||
dropTstamp, isIn := t.drop[msg.locator.root]
|
dropTstamp, isIn := t.drop[sender.locator.root]
|
||||||
// Here be dragons
|
// Here be dragons
|
||||||
switch {
|
switch {
|
||||||
case !noLoop: // do nothing
|
case !noLoop: // do nothing
|
||||||
case isIn && dropTstamp >= msg.locator.tstamp: // do nothing
|
case isIn && dropTstamp >= sender.locator.tstamp: // do nothing
|
||||||
case firstIsBetter(&msg.locator.root, &t.data.locator.root):
|
case firstIsBetter(&sender.locator.root, &t.data.locator.root):
|
||||||
updateRoot = true
|
updateRoot = true
|
||||||
case t.data.locator.root != msg.locator.root: // do nothing
|
case t.data.locator.root != sender.locator.root: // do nothing
|
||||||
case t.data.locator.tstamp > msg.locator.tstamp: // do nothing
|
case t.data.locator.tstamp > sender.locator.tstamp: // do nothing
|
||||||
case noParent:
|
case noParent:
|
||||||
updateRoot = true
|
updateRoot = true
|
||||||
case cost < pCost:
|
case cost < pCost:
|
||||||
updateRoot = true
|
updateRoot = true
|
||||||
case sender.port != t.parent: // do nothing
|
case sender.port != t.parent: // do nothing
|
||||||
case !equiv(&msg.locator, &t.data.locator):
|
case !equiv(&sender.locator, &t.data.locator):
|
||||||
updateRoot = true
|
updateRoot = true
|
||||||
case now.Sub(t.time) < switch_throttle: // do nothing
|
case now.Sub(t.time) < switch_throttle: // do nothing
|
||||||
case msg.locator.tstamp > t.data.locator.tstamp:
|
case sender.locator.tstamp > t.data.locator.tstamp:
|
||||||
updateRoot = true
|
updateRoot = true
|
||||||
}
|
}
|
||||||
if updateRoot {
|
if updateRoot {
|
||||||
if !equiv(&msg.locator, &t.data.locator) {
|
if !equiv(&sender.locator, &t.data.locator) {
|
||||||
doUpdate = true
|
doUpdate = true
|
||||||
t.data.seq++
|
t.data.seq++
|
||||||
select {
|
select {
|
||||||
case t.core.router.reset <- struct{}{}:
|
case t.core.router.reset <- struct{}{}:
|
||||||
default:
|
default:
|
||||||
}
|
}
|
||||||
//t.core.log.Println("Switch update:", msg.locator.root, msg.locator.tstamp, msg.locator.coords)
|
|
||||||
//fmt.Println("Switch update:", msg.Locator.Root, msg.Locator.Tstamp, msg.Locator.Coords)
|
|
||||||
}
|
}
|
||||||
if t.data.locator.tstamp != msg.locator.tstamp {
|
if t.data.locator.tstamp != sender.locator.tstamp {
|
||||||
t.time = now
|
t.time = now
|
||||||
}
|
}
|
||||||
t.data.locator = msg.locator
|
t.data.locator = sender.locator
|
||||||
t.parent = sender.port
|
t.parent = sender.port
|
||||||
t.data.sigs = sigs
|
t.core.peers.sendSwitchMsgs()
|
||||||
//t.core.log.Println("Switch update:", msg.Locator.Root, msg.Locator.Tstamp, msg.Locator.Coords)
|
|
||||||
}
|
}
|
||||||
if doUpdate {
|
if doUpdate {
|
||||||
t.updater.Store(&sync.Once{})
|
t.updater.Store(&sync.Once{})
|
||||||
@ -374,6 +429,7 @@ func (t *switchTable) handleMessage(msg *switchMessage, fromPort switchPort, sig
|
|||||||
return
|
return
|
||||||
}
|
}
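
A small worked example (not from the source) of the cross-multiplied comparison used inside unlockedHandleMsg above: preferring the sender when sLen/sTime < pLen/pTime is equivalent to checking sLen*pTime < pLen*sTime, which is what the cost and pCost variables compute without risking a divide by zero.

package main

import "fmt"

// senderIsCheaper mirrors the `cost < pCost` check: sLen and pLen are the coord
// lengths via the sender and the current parent, sTime and pTime how long each
// has been known (plus switch_timeout), in seconds.
func senderIsCheaper(sLen, pLen, sTime, pTime int) bool {
	cost := sLen * pTime
	pCost := pLen * sTime
	return cost < pCost
}

func main() {
	// A 3-hop path known for 10s vs a 2-hop path known for 5s:
	// 3*5 = 15 < 2*10 = 20, so the longer but longer-lived path is preferred.
	fmt.Println(senderIsCheaper(3, 2, 10, 5)) // true
}
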
|
||||||
|
|
||||||
|
// This is called via a sync.Once to update the atomically readable subset of switch information that gets used for routing decisions.
|
||||||
func (t *switchTable) updateTable() {
|
func (t *switchTable) updateTable() {
|
||||||
// WARNING this should only be called from within t.data.updater.Do()
|
// WARNING this should only be called from within t.data.updater.Do()
|
||||||
// It relies on the sync.Once for synchronization with messages and lookups
|
// It relies on the sync.Once for synchronization with messages and lookups
|
||||||
@ -401,50 +457,43 @@ func (t *switchTable) updateTable() {
|
|||||||
port: pinfo.port,
|
port: pinfo.port,
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
sort.SliceStable(newTable.elems, func(i, j int) bool {
|
||||||
|
return t.data.peers[newTable.elems[i].port].firstSeen.Before(t.data.peers[newTable.elems[j].port].firstSeen)
|
||||||
|
})
|
||||||
t.table.Store(newTable)
|
t.table.Store(newTable)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (t *switchTable) lookup(dest []byte, ttl uint64) (switchPort, uint64) {
|
// This does the switch layer lookups that decide how to route traffic.
|
||||||
|
// Traffic uses greedy routing in a metric space, where the metric distance between nodes is equal to the distance between them on the tree.
|
||||||
|
// Traffic must be routed to a node that is closer to the destination via the metric space distance.
|
||||||
|
// In the event that two nodes are equally close, it gets routed to the one with the longest uptime (due to the order that things are iterated over).
|
||||||
|
// The size of the outgoing packet queue is added to a node's tree distance when calculating the cost of forwarding to a node, subject to the constraint that the real tree distance puts them closer to the destination than we are.
|
||||||
|
// Doing so adds a limited form of backpressure routing, based on local information, which allows us to forward traffic around *local* bottlenecks, provided that another greedy path exists.
|
||||||
|
func (t *switchTable) lookup(dest []byte) switchPort {
|
||||||
t.updater.Load().(*sync.Once).Do(t.updateTable)
|
t.updater.Load().(*sync.Once).Do(t.updateTable)
|
||||||
table := t.table.Load().(lookupTable)
|
table := t.table.Load().(lookupTable)
|
||||||
|
myDist := table.self.dist(dest)
|
||||||
|
if myDist == 0 {
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
// cost is in units of (expected distance) + (expected queue size), where expected distance is used as an approximation of the minimum backpressure gradient needed for packets to flow
|
||||||
ports := t.core.peers.getPorts()
|
ports := t.core.peers.getPorts()
|
||||||
getBandwidth := func(port switchPort) float64 {
|
|
||||||
var bandwidth float64
|
|
||||||
if p, isIn := ports[port]; isIn {
|
|
||||||
bandwidth = p.getBandwidth()
|
|
||||||
}
|
|
||||||
return bandwidth
|
|
||||||
}
|
|
||||||
var best switchPort
|
var best switchPort
|
||||||
myDist := table.self.dist(dest) //getDist(table.self.coords)
|
bestCost := int64(^uint64(0) >> 1)
|
||||||
if !(uint64(myDist) < ttl) {
|
|
||||||
return 0, 0
|
|
||||||
}
|
|
||||||
// score is in units of bandwidth / distance
|
|
||||||
bestScore := float64(-1)
|
|
||||||
for _, info := range table.elems {
|
for _, info := range table.elems {
|
||||||
dist := info.locator.dist(dest) //getDist(info.locator.coords)
|
dist := info.locator.dist(dest)
|
||||||
if !(dist < myDist) {
|
if !(dist < myDist) {
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
score := getBandwidth(info.port)
|
p, isIn := ports[info.port]
|
||||||
score /= float64(1 + dist)
|
if !isIn {
|
||||||
if score > bestScore {
|
continue
|
||||||
|
}
|
||||||
|
cost := int64(dist) + p.getQueueSize()
|
||||||
|
if cost < bestCost {
|
||||||
best = info.port
|
best = info.port
|
||||||
bestScore = score
|
bestCost = cost
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
//t.core.log.Println("DEBUG: sending to", best, "bandwidth", getBandwidth(best))
|
return best
|
||||||
return best, uint64(myDist)
|
|
||||||
}
|
}
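
A minimal sketch (with simplified stand-in types, not the real switchTable) of the backpressure-aware choice described in the comments above lookup: among peers that are strictly closer to the destination on the tree, pick the one with the smallest (tree distance + queue size).

package main

import "fmt"

type candidate struct {
	port      uint64
	dist      int64 // tree distance from this peer to the destination
	queueSize int64 // packets currently queued for this peer
}

// pickNextHop returns the port minimising dist+queueSize among peers that are
// strictly closer to the destination than we are, or false if none qualify.
func pickNextHop(myDist int64, peers []candidate) (uint64, bool) {
	var best uint64
	bestCost := int64(^uint64(0) >> 1) // max int64, as in the real lookup
	found := false
	for _, c := range peers {
		if !(c.dist < myDist) {
			continue // must make strict progress on the tree metric
		}
		if cost := c.dist + c.queueSize; cost < bestCost {
			best, bestCost, found = c.port, cost, true
		}
	}
	return best, found
}

func main() {
	peers := []candidate{
		{port: 1, dist: 2, queueSize: 10}, // shorter path, long queue: cost 12
		{port: 2, dist: 3, queueSize: 0},  // longer path, empty queue: cost 3
	}
	fmt.Println(pickNextHop(4, peers)) // 2 true
}
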
|
||||||
|
|
||||||
////////////////////////////////////////////////////////////////////////////////
|
|
||||||
|
|
||||||
//Signature stuff
|
|
||||||
|
|
||||||
type sigInfo struct {
|
|
||||||
next sigPubKey
|
|
||||||
sig sigBytes
|
|
||||||
}
|
|
||||||
|
|
||||||
////////////////////////////////////////////////////////////////////////////////
|
|
||||||
|
@ -10,17 +10,24 @@ package yggdrasil
|
|||||||
// Could be used to DoS (connect, give someone else's keys, spew garbage)
|
// Could be used to DoS (connect, give someone else's keys, spew garbage)
|
||||||
// I guess the "peer" part should watch for link packets, disconnect?
|
// I guess the "peer" part should watch for link packets, disconnect?
|
||||||
|
|
||||||
import "net"
|
// TCP connections start with a metadata exchange.
|
||||||
import "time"
|
// It involves exchanging version numbers and crypto keys
|
||||||
import "errors"
|
// See version.go for version metadata format
|
||||||
import "sync"
|
|
||||||
import "fmt"
|
import (
|
||||||
import "bufio"
|
"errors"
|
||||||
import "golang.org/x/net/proxy"
|
"fmt"
|
||||||
|
"net"
|
||||||
|
"sync"
|
||||||
|
"sync/atomic"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"golang.org/x/net/proxy"
|
||||||
|
)
|
||||||
|
|
||||||
const tcp_msgSize = 2048 + 65535 // TODO figure out what makes sense
|
const tcp_msgSize = 2048 + 65535 // TODO figure out what makes sense
|
||||||
|
|
||||||
// wrapper function for non tcp/ip connections
|
// Wrapper function for non tcp/ip connections.
|
||||||
func setNoDelay(c net.Conn, delay bool) {
|
func setNoDelay(c net.Conn, delay bool) {
|
||||||
tcp, ok := c.(*net.TCPConn)
|
tcp, ok := c.(*net.TCPConn)
|
||||||
if ok {
|
if ok {
|
||||||
@ -28,6 +35,7 @@ func setNoDelay(c net.Conn, delay bool) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// The TCP listener and information about active TCP connections, to avoid duplication.
|
||||||
type tcpInterface struct {
|
type tcpInterface struct {
|
||||||
core *Core
|
core *Core
|
||||||
serv net.Listener
|
serv net.Listener
|
||||||
@ -36,6 +44,8 @@ type tcpInterface struct {
|
|||||||
conns map[tcpInfo](chan struct{})
|
conns map[tcpInfo](chan struct{})
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// This is used as the key to a map that tracks existing connections, to prevent multiple connections to the same keys and local/remote address pair from occurring.
|
||||||
|
// Different address combinations are allowed, so multi-homing is still technically possible (but not necessarily advisable).
|
||||||
type tcpInfo struct {
|
type tcpInfo struct {
|
||||||
box boxPubKey
|
box boxPubKey
|
||||||
sig sigPubKey
|
sig sigPubKey
|
||||||
@ -43,15 +53,21 @@ type tcpInfo struct {
|
|||||||
remoteAddr string
|
remoteAddr string
|
||||||
}
|
}
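
To illustrate the de-duplication described above (a sketch with simplified key types, not the actual tcpInterface code): an outgoing call is only started when no connection with the same keys and local/remote address strings is already tracked, and the entry is removed again when the connection ends.

package main

import (
	"fmt"
	"sync"
)

// connKey stands in for tcpInfo: node keys plus both endpoint address strings,
// so connecting over a different interface/address is still allowed.
type connKey struct {
	box, sig, localAddr, remoteAddr string
}

type connTracker struct {
	mutex sync.Mutex
	conns map[connKey]struct{}
}

// tryAdd registers a connection attempt, reporting false if an identical
// (keys, local, remote) tuple is already active.
func (t *connTracker) tryAdd(k connKey) bool {
	t.mutex.Lock()
	defer t.mutex.Unlock()
	if t.conns == nil {
		t.conns = make(map[connKey]struct{})
	}
	if _, exists := t.conns[k]; exists {
		return false
	}
	t.conns[k] = struct{}{}
	return true
}

func (t *connTracker) remove(k connKey) {
	t.mutex.Lock()
	defer t.mutex.Unlock()
	delete(t.conns, k)
}

func main() {
	var t connTracker
	k := connKey{box: "boxKey", sig: "sigKey", localAddr: "192.0.2.1:9001", remoteAddr: "198.51.100.2:9001"}
	fmt.Println(t.tryAdd(k), t.tryAdd(k)) // true false
	t.remove(k)
	fmt.Println(t.tryAdd(k)) // true
}
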
|
||||||
|
|
||||||
|
// Returns the address of the listener.
|
||||||
func (iface *tcpInterface) getAddr() *net.TCPAddr {
|
func (iface *tcpInterface) getAddr() *net.TCPAddr {
|
||||||
return iface.serv.Addr().(*net.TCPAddr)
|
return iface.serv.Addr().(*net.TCPAddr)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Attempts to initiate a connection to the provided address.
|
||||||
func (iface *tcpInterface) connect(addr string) {
|
func (iface *tcpInterface) connect(addr string) {
|
||||||
iface.call(addr)
|
iface.call(addr)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Attempts to initiate a connection to the provided address, via the provided SOCKS proxy address.
|
||||||
func (iface *tcpInterface) connectSOCKS(socksaddr, peeraddr string) {
|
func (iface *tcpInterface) connectSOCKS(socksaddr, peeraddr string) {
|
||||||
|
// TODO make sure this doesn't keep attempting/killing connections when one is already active.
|
||||||
|
// I think some of the interaction between this and callWithConn needs work, so the dial isn't even attempted if there's already an outgoing call to peeraddr.
|
||||||
|
// Or maybe only if there's already an outgoing call to peeraddr via this socksaddr?
|
||||||
go func() {
|
go func() {
|
||||||
dialer, err := proxy.SOCKS5("tcp", socksaddr, nil, proxy.Direct)
|
dialer, err := proxy.SOCKS5("tcp", socksaddr, nil, proxy.Direct)
|
||||||
if err == nil {
|
if err == nil {
|
||||||
@ -69,6 +85,7 @@ func (iface *tcpInterface) connectSOCKS(socksaddr, peeraddr string) {
|
|||||||
}()
|
}()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Initializes the struct.
|
||||||
func (iface *tcpInterface) init(core *Core, addr string) (err error) {
|
func (iface *tcpInterface) init(core *Core, addr string) (err error) {
|
||||||
iface.core = core
|
iface.core = core
|
||||||
|
|
||||||
@ -82,6 +99,7 @@ func (iface *tcpInterface) init(core *Core, addr string) (err error) {
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Runs the listener, which spawns off goroutines for incoming connections.
|
||||||
func (iface *tcpInterface) listener() {
|
func (iface *tcpInterface) listener() {
|
||||||
defer iface.serv.Close()
|
defer iface.serv.Close()
|
||||||
iface.core.log.Println("Listening for TCP on:", iface.serv.Addr().String())
|
iface.core.log.Println("Listening for TCP on:", iface.serv.Addr().String())
|
||||||
@ -94,6 +112,7 @@ func (iface *tcpInterface) listener() {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Called by connectSOCKS, it's like call but with the connection already established.
|
||||||
func (iface *tcpInterface) callWithConn(conn net.Conn) {
|
func (iface *tcpInterface) callWithConn(conn net.Conn) {
|
||||||
go func() {
|
go func() {
|
||||||
raddr := conn.RemoteAddr().String()
|
raddr := conn.RemoteAddr().String()
|
||||||
@ -114,6 +133,11 @@ func (iface *tcpInterface) callWithConn(conn net.Conn) {
|
|||||||
}()
|
}()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Checks if a connection already exists.
|
||||||
|
// If not, it adds it to the list of active outgoing calls (to block future attempts) and dials the address.
|
||||||
|
// If the dial is successful, it launches the handler.
|
||||||
|
// When finished, it removes the outgoing call, so reconnection attempts can be made later.
|
||||||
|
// This all happens in a separate goroutine that it spawns.
|
||||||
func (iface *tcpInterface) call(saddr string) {
|
func (iface *tcpInterface) call(saddr string) {
|
||||||
go func() {
|
go func() {
|
||||||
quit := false
|
quit := false
|
||||||
@ -139,29 +163,45 @@ func (iface *tcpInterface) call(saddr string) {
|
|||||||
}()
|
}()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// This exchanges/checks connection metadata, sets up the peer struct, sets up the writer goroutine, and then runs the reader within the current goroutine.
|
||||||
|
// It defers a bunch of cleanup stuff to tear down all of these things when the reader exits (e.g. due to a closed connection or a timeout).
|
||||||
func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
||||||
defer sock.Close()
|
defer sock.Close()
|
||||||
// Get our keys
|
// Get our keys
|
||||||
keys := []byte{}
|
myLinkPub, myLinkPriv := newBoxKeys() // ephemeral link keys
|
||||||
keys = append(keys, tcp_key[:]...)
|
meta := version_getBaseMetadata()
|
||||||
keys = append(keys, iface.core.boxPub[:]...)
|
meta.box = iface.core.boxPub
|
||||||
keys = append(keys, iface.core.sigPub[:]...)
|
meta.sig = iface.core.sigPub
|
||||||
_, err := sock.Write(keys)
|
meta.link = *myLinkPub
|
||||||
|
metaBytes := meta.encode()
|
||||||
|
_, err := sock.Write(metaBytes)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
timeout := time.Now().Add(6 * time.Second)
|
timeout := time.Now().Add(6 * time.Second)
|
||||||
sock.SetReadDeadline(timeout)
|
sock.SetReadDeadline(timeout)
|
||||||
n, err := sock.Read(keys)
|
_, err = sock.Read(metaBytes)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if n < len(keys) { /*panic("Partial key packet?") ;*/
|
meta = version_metadata{} // Reset to zero value
|
||||||
|
if !meta.decode(metaBytes) || !meta.check() {
|
||||||
|
// Failed to decode and check the metadata
|
||||||
|
// If it's a version mismatch issue, then print an error message
|
||||||
|
base := version_getBaseMetadata()
|
||||||
|
if meta.meta == base.meta {
|
||||||
|
if meta.ver > base.ver {
|
||||||
|
iface.core.log.Println("Failed to connect to node:", sock.RemoteAddr().String(), "version:", meta.ver)
|
||||||
|
} else if meta.ver == base.ver && meta.minorVer > base.minorVer {
|
||||||
|
iface.core.log.Println("Failed to connect to node:", sock.RemoteAddr().String(), "version:", fmt.Sprintf("%d.%d", meta.ver, meta.minorVer))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// TODO? Block forever to prevent future connection attempts? suppress future messages about the same node?
|
||||||
return
|
return
|
||||||
}
|
}
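
The check above only logs when the metadata fails to decode or check, singling out peers that appear to run a newer protocol. A compact restatement of that rule (uint64 here is just a stand-in for whatever integer type version.go actually uses):

package main

import "fmt"

// remoteIsNewer mirrors the branch in the handler above: a peer is reported as
// newer when its major version is higher, or the majors match and its minor
// version is higher than ours.
func remoteIsNewer(remoteVer, remoteMinor, localVer, localMinor uint64) bool {
	if remoteVer != localVer {
		return remoteVer > localVer
	}
	return remoteMinor > localMinor
}

func main() {
	fmt.Println(remoteIsNewer(1, 2, 1, 1)) // true: same major, newer minor
	fmt.Println(remoteIsNewer(1, 0, 1, 1)) // false: same major, older minor
}
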
|
||||||
info := tcpInfo{}
|
info := tcpInfo{ // used as a map key, so don't include ephemeral link key
|
||||||
if !tcp_chop_keys(&info.box, &info.sig, &keys) { /*panic("Invalid key packet?") ;*/
|
box: meta.box,
|
||||||
return
|
sig: meta.sig,
|
||||||
}
|
}
|
||||||
// Quit the parent call if this is a connection to ourself
|
// Quit the parent call if this is a connection to ourself
|
||||||
equiv := func(k1, k2 []byte) bool {
|
equiv := func(k1, k2 []byte) bool {
|
||||||
@ -174,7 +214,7 @@ func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
|||||||
}
|
}
|
||||||
if equiv(info.box[:], iface.core.boxPub[:]) {
|
if equiv(info.box[:], iface.core.boxPub[:]) {
|
||||||
return
|
return
|
||||||
} // testing
|
}
|
||||||
if equiv(info.sig[:], iface.core.sigPub[:]) {
|
if equiv(info.sig[:], iface.core.sigPub[:]) {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@ -208,53 +248,62 @@ func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
|||||||
}()
|
}()
|
||||||
// Note that multiple connections to the same node are allowed
|
// Note that multiple connections to the same node are allowed
|
||||||
// E.g. over different interfaces
|
// E.g. over different interfaces
|
||||||
linkIn := make(chan []byte, 1)
|
p := iface.core.peers.newPeer(&info.box, &info.sig, getSharedKey(myLinkPriv, &meta.link))
|
||||||
p := iface.core.peers.newPeer(&info.box, &info.sig) //, in, out)
|
p.linkOut = make(chan []byte, 1)
|
||||||
in := func(bs []byte) {
|
in := func(bs []byte) {
|
||||||
p.handlePacket(bs, linkIn)
|
p.handlePacket(bs)
|
||||||
}
|
}
|
||||||
out := make(chan []byte, 32) // TODO? what size makes sense
|
out := make(chan []byte, 32) // TODO? what size makes sense
|
||||||
defer close(out)
|
defer close(out)
|
||||||
buf := bufio.NewWriterSize(sock, tcp_msgSize)
|
|
||||||
send := func(msg []byte) {
|
|
||||||
msgLen := wire_encode_uint64(uint64(len(msg)))
|
|
||||||
before := buf.Buffered()
|
|
||||||
start := time.Now()
|
|
||||||
buf.Write(tcp_msg[:])
|
|
||||||
buf.Write(msgLen)
|
|
||||||
buf.Write(msg)
|
|
||||||
timed := time.Since(start)
|
|
||||||
after := buf.Buffered()
|
|
||||||
written := (before + len(tcp_msg) + len(msgLen) + len(msg)) - after
|
|
||||||
if written > 0 {
|
|
||||||
p.updateBandwidth(written, timed)
|
|
||||||
}
|
|
||||||
util_putBytes(msg)
|
|
||||||
}
|
|
||||||
flush := func() {
|
|
||||||
size := buf.Buffered()
|
|
||||||
start := time.Now()
|
|
||||||
buf.Flush()
|
|
||||||
timed := time.Since(start)
|
|
||||||
p.updateBandwidth(size, timed)
|
|
||||||
}
|
|
||||||
go func() {
|
go func() {
|
||||||
|
var shadow int64
|
||||||
var stack [][]byte
|
var stack [][]byte
|
||||||
put := func(msg []byte) {
|
put := func(msg []byte) {
|
||||||
stack = append(stack, msg)
|
stack = append(stack, msg)
|
||||||
for len(stack) > 32 {
|
for len(stack) > 32 {
|
||||||
util_putBytes(stack[0])
|
util_putBytes(stack[0])
|
||||||
stack = stack[1:]
|
stack = stack[1:]
|
||||||
|
shadow++
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
for msg := range out {
|
send := func(msg []byte) {
|
||||||
put(msg)
|
msgLen := wire_encode_uint64(uint64(len(msg)))
|
||||||
|
buf := net.Buffers{tcp_msg[:], msgLen, msg}
|
||||||
|
buf.WriteTo(sock)
|
||||||
|
atomic.AddUint64(&p.bytesSent, uint64(len(tcp_msg)+len(msgLen)+len(msg)))
|
||||||
|
util_putBytes(msg)
|
||||||
|
}
|
||||||
|
timerInterval := 4 * time.Second
|
||||||
|
timer := time.NewTimer(timerInterval)
|
||||||
|
defer timer.Stop()
|
||||||
|
for {
|
||||||
|
if shadow != 0 {
|
||||||
|
p.updateQueueSize(-shadow)
|
||||||
|
shadow = 0
|
||||||
|
}
|
||||||
|
timer.Stop()
|
||||||
|
select {
|
||||||
|
case <-timer.C:
|
||||||
|
default:
|
||||||
|
}
|
||||||
|
timer.Reset(timerInterval)
|
||||||
|
select {
|
||||||
|
case _ = <-timer.C:
|
||||||
|
send(nil) // TCP keep-alive traffic
|
||||||
|
case msg := <-p.linkOut:
|
||||||
|
send(msg)
|
||||||
|
case msg, ok := <-out:
|
||||||
|
if !ok {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
put(msg)
|
||||||
|
}
|
||||||
for len(stack) > 0 {
|
for len(stack) > 0 {
|
||||||
// Keep trying to fill the stack (LIFO order) while sending
|
|
||||||
select {
|
select {
|
||||||
|
case msg := <-p.linkOut:
|
||||||
|
send(msg)
|
||||||
case msg, ok := <-out:
|
case msg, ok := <-out:
|
||||||
if !ok {
|
if !ok {
|
||||||
flush()
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
put(msg)
|
put(msg)
|
||||||
@ -262,26 +311,26 @@ func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
|||||||
msg := stack[len(stack)-1]
|
msg := stack[len(stack)-1]
|
||||||
stack = stack[:len(stack)-1]
|
stack = stack[:len(stack)-1]
|
||||||
send(msg)
|
send(msg)
|
||||||
|
p.updateQueueSize(-1)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
flush()
|
|
||||||
}
|
}
|
||||||
}()
|
}()
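
A standalone sketch of the bounded LIFO queue used by the writer goroutine above (not the real code, which also interleaves keep-alives and linkOut traffic): put keeps at most 32 queued messages and counts what it drops, and messages are sent newest-first so fresh traffic wins under congestion.

package main

import "fmt"

type lifoQueue struct {
	stack   [][]byte
	dropped int // analogous to the "shadow" counter fed back via updateQueueSize
}

// put queues a message, discarding the oldest entries once more than 32 are held.
func (q *lifoQueue) put(msg []byte) {
	q.stack = append(q.stack, msg)
	for len(q.stack) > 32 {
		q.stack = q.stack[1:]
		q.dropped++
	}
}

// pop returns the most recently queued message, newest-first.
func (q *lifoQueue) pop() ([]byte, bool) {
	if len(q.stack) == 0 {
		return nil, false
	}
	msg := q.stack[len(q.stack)-1]
	q.stack = q.stack[:len(q.stack)-1]
	return msg, true
}

func main() {
	var q lifoQueue
	for i := 0; i < 40; i++ {
		q.put([]byte{byte(i)})
	}
	msg, _ := q.pop()
	fmt.Println(len(q.stack), q.dropped, msg[0]) // 31 8 39
}
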
|
||||||
p.out = func(msg []byte) {
|
p.out = func(msg []byte) {
|
||||||
defer func() { recover() }()
|
defer func() { recover() }()
|
||||||
select {
|
select {
|
||||||
case out <- msg:
|
case out <- msg:
|
||||||
|
p.updateQueueSize(1)
|
||||||
default:
|
default:
|
||||||
util_putBytes(msg)
|
util_putBytes(msg)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
p.close = func() { sock.Close() }
|
p.close = func() { sock.Close() }
|
||||||
setNoDelay(sock, true)
|
setNoDelay(sock, true)
|
||||||
go p.linkLoop(linkIn)
|
go p.linkLoop()
|
||||||
defer func() {
|
defer func() {
|
||||||
// Put all of our cleanup here...
|
// Put all of our cleanup here...
|
||||||
p.core.peers.removePeer(p.port)
|
p.core.peers.removePeer(p.port)
|
||||||
close(linkIn)
|
|
||||||
}()
|
}()
|
||||||
them, _, _ := net.SplitHostPort(sock.RemoteAddr().String())
|
them, _, _ := net.SplitHostPort(sock.RemoteAddr().String())
|
||||||
themNodeID := getNodeID(&info.box)
|
themNodeID := getNodeID(&info.box)
|
||||||
@ -294,6 +343,9 @@ func (iface *tcpInterface) handler(sock net.Conn, incoming bool) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// This reads from the socket into a []byte buffer for incoming messages.
|
||||||
|
// It copies completed messages out of the cache into a new slice, and passes them to the peer struct via the provided `in func([]byte)` argument.
|
||||||
|
// Then it shifts the incomplete fragments of data forward so future reads won't overwrite it.
|
||||||
func (iface *tcpInterface) reader(sock net.Conn, in func([]byte)) {
|
func (iface *tcpInterface) reader(sock net.Conn, in func([]byte)) {
|
||||||
bs := make([]byte, 2*tcp_msgSize)
|
bs := make([]byte, 2*tcp_msgSize)
|
||||||
frag := bs[:0]
|
frag := bs[:0]
|
||||||
@ -302,14 +354,12 @@ func (iface *tcpInterface) reader(sock net.Conn, in func([]byte)) {
|
|||||||
sock.SetReadDeadline(timeout)
|
sock.SetReadDeadline(timeout)
|
||||||
n, err := sock.Read(bs[len(frag):])
|
n, err := sock.Read(bs[len(frag):])
|
||||||
if err != nil || n == 0 {
|
if err != nil || n == 0 {
|
||||||
// iface.core.log.Println(err)
|
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
frag = bs[:len(frag)+n]
|
frag = bs[:len(frag)+n]
|
||||||
for {
|
for {
|
||||||
msg, ok, err := tcp_chop_msg(&frag)
|
msg, ok, err := tcp_chop_msg(&frag)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
// iface.core.log.Println(err)
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if !ok {
|
if !ok {
|
||||||
@ -325,29 +375,13 @@ func (iface *tcpInterface) reader(sock net.Conn, in func([]byte)) {
|
|||||||
|
|
||||||
////////////////////////////////////////////////////////////////////////////////
|
////////////////////////////////////////////////////////////////////////////////
|
||||||
|
|
||||||
// Magic bytes to check
|
// These are 4 magic bytes used to catch if something went horribly wrong with the tcp connection.
|
||||||
var tcp_key = [...]byte{'k', 'e', 'y', 's'}
|
|
||||||
var tcp_msg = [...]byte{0xde, 0xad, 0xb1, 0x75} // "dead bits"
|
var tcp_msg = [...]byte{0xde, 0xad, 0xb1, 0x75} // "dead bits"
|
||||||
|
|
||||||
func tcp_chop_keys(box *boxPubKey, sig *sigPubKey, bs *[]byte) bool {
|
// This takes a pointer to a slice as an argument.
|
||||||
// This one is pretty simple: we know how long the message should be
|
// It checks if there's a complete message and, if so, slices out those parts and returns the message, true, and nil.
|
||||||
// So don't call this with a message that's too short
|
// If there's no error, but also no complete message, it returns nil, false, and nil.
|
||||||
if len(*bs) < len(tcp_key)+len(*box)+len(*sig) {
|
// If there's an error, it returns nil, false, and the error, which the reader then handles (currently, by returning from the reader, which causes the connection to close).
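
A sketch of the framing this describes, under simplified assumptions: 4 magic bytes, then a length, then the payload. The length here uses encoding/binary's Uvarint purely as a stand-in; the real code frames messages with wire_encode_uint64, whose exact encoding isn't shown in this diff.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

var magic = [...]byte{0xde, 0xad, 0xb1, 0x75} // same "dead bits" marker as tcp_msg

// chopMsg returns (msg, true, nil) if a complete message is buffered, (nil,
// false, nil) if more data is needed, and an error if the stream is corrupt.
func chopMsg(bs *[]byte) ([]byte, bool, error) {
	if len(*bs) < len(magic) {
		return nil, false, nil
	}
	if !bytes.Equal((*bs)[:len(magic)], magic[:]) {
		return nil, false, fmt.Errorf("bad magic bytes")
	}
	msgLen, n := binary.Uvarint((*bs)[len(magic):])
	if n == 0 {
		return nil, false, nil // length prefix not fully received yet
	}
	if n < 0 {
		return nil, false, fmt.Errorf("length prefix overflow")
	}
	start := len(magic) + n
	if uint64(len(*bs)-start) < msgLen {
		return nil, false, nil // payload not fully received yet
	}
	msg := append([]byte(nil), (*bs)[start:start+int(msgLen)]...)
	*bs = (*bs)[start+int(msgLen):]
	return msg, true, nil
}

func main() {
	payload := []byte("hello")
	buf := append([]byte(nil), magic[:]...)
	buf = binary.AppendUvarint(buf, uint64(len(payload)))
	buf = append(buf, payload...)
	msg, ok, err := chopMsg(&buf)
	fmt.Println(string(msg), ok, err, len(buf)) // hello true <nil> 0
}
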
|
||||||
return false
|
|
||||||
}
|
|
||||||
for idx := range tcp_key {
|
|
||||||
if (*bs)[idx] != tcp_key[idx] {
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
(*bs) = (*bs)[len(tcp_key):]
|
|
||||||
copy(box[:], *bs)
|
|
||||||
(*bs) = (*bs)[len(box):]
|
|
||||||
copy(sig[:], *bs)
|
|
||||||
(*bs) = (*bs)[len(sig):]
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
|
|
||||||
func tcp_chop_msg(bs *[]byte) ([]byte, bool, error) {
|
func tcp_chop_msg(bs *[]byte) ([]byte, bool, error) {
|
||||||
// Returns msg, ok, err
|
// Returns msg, ok, err
|
||||||
if len(*bs) < len(tcp_msg) {
|
if len(*bs) < len(tcp_msg) {
|
||||||
|
@ -2,12 +2,15 @@ package yggdrasil
|
|||||||
|
|
||||||
// This manages the tun driver to send/recv packets to/from applications
|
// This manages the tun driver to send/recv packets to/from applications
|
||||||
|
|
||||||
import "github.com/songgao/packets/ethernet"
|
import (
|
||||||
import "github.com/yggdrasil-network/water"
|
"github.com/songgao/packets/ethernet"
|
||||||
|
"github.com/yggdrasil-network/water"
|
||||||
|
)
|
||||||
|
|
||||||
const tun_IPv6_HEADER_LENGTH = 40
|
const tun_IPv6_HEADER_LENGTH = 40
|
||||||
const tun_ETHER_HEADER_LENGTH = 14
|
const tun_ETHER_HEADER_LENGTH = 14
|
||||||
|
|
||||||
|
// Represents a running TUN/TAP interface.
|
||||||
type tunDevice struct {
|
type tunDevice struct {
|
||||||
core *Core
|
core *Core
|
||||||
icmpv6 icmpv6
|
icmpv6 icmpv6
|
||||||
@ -17,6 +20,9 @@ type tunDevice struct {
|
|||||||
iface *water.Interface
|
iface *water.Interface
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Defines which parameters are expected by default for a TUN/TAP adapter on a
|
||||||
|
// specific platform. These values are populated in the relevant tun_*.go for
|
||||||
|
// the platform being targeted. They must be set.
|
||||||
type tunDefaultParameters struct {
|
type tunDefaultParameters struct {
|
||||||
maximumIfMTU int
|
maximumIfMTU int
|
||||||
defaultIfMTU int
|
defaultIfMTU int
|
||||||
@ -24,6 +30,8 @@ type tunDefaultParameters struct {
|
|||||||
defaultIfTAPMode bool
|
defaultIfTAPMode bool
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Gets the maximum supported MTU for the platform based on the defaults in
|
||||||
|
// getDefaults().
|
||||||
func getSupportedMTU(mtu int) int {
|
func getSupportedMTU(mtu int) int {
|
||||||
if mtu > getDefaults().maximumIfMTU {
|
if mtu > getDefaults().maximumIfMTU {
|
||||||
return getDefaults().maximumIfMTU
|
return getDefaults().maximumIfMTU
|
||||||
@ -31,11 +39,14 @@ func getSupportedMTU(mtu int) int {
|
|||||||
return mtu
|
return mtu
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Initialises the TUN/TAP adapter.
|
||||||
func (tun *tunDevice) init(core *Core) {
|
func (tun *tunDevice) init(core *Core) {
|
||||||
tun.core = core
|
tun.core = core
|
||||||
tun.icmpv6.init(tun)
|
tun.icmpv6.init(tun)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Starts the setup process for the TUN/TAP adapter, and if successful, starts
|
||||||
|
// the read/write goroutines to handle packets on that interface.
|
||||||
func (tun *tunDevice) start(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) start(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
if ifname == "none" {
|
if ifname == "none" {
|
||||||
return nil
|
return nil
|
||||||
@ -48,6 +59,9 @@ func (tun *tunDevice) start(ifname string, iftapmode bool, addr string, mtu int)
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Writes a packet to the TUN/TAP adapter. If the adapter is running in TAP
|
||||||
|
// mode then additional ethernet encapsulation is added for the benefit of the
|
||||||
|
// host operating system.
|
||||||
func (tun *tunDevice) write() error {
|
func (tun *tunDevice) write() error {
|
||||||
for {
|
for {
|
||||||
data := <-tun.recv
|
data := <-tun.recv
|
||||||
@ -75,6 +89,10 @@ func (tun *tunDevice) write() error {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Reads any packets that are waiting on the TUN/TAP adapter. If the adapter
|
||||||
|
// is running in TAP mode then the ethernet headers will automatically be
|
||||||
|
// processed and stripped if necessary. If an ICMPv6 packet is found, then
|
||||||
|
// the relevant helper functions in icmpv6.go are called.
|
||||||
func (tun *tunDevice) read() error {
|
func (tun *tunDevice) read() error {
|
||||||
mtu := tun.mtu
|
mtu := tun.mtu
|
||||||
if tun.iface.IsTAP() {
|
if tun.iface.IsTAP() {
|
||||||
@ -109,6 +127,9 @@ func (tun *tunDevice) read() error {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Closes the TUN/TAP adapter. This is only usually called when the Yggdrasil
|
||||||
|
// process stops. Typically this operation will happen quickly, but on macOS
|
||||||
|
// it can block until a read operation is completed.
|
||||||
func (tun *tunDevice) close() error {
|
func (tun *tunDevice) close() error {
|
||||||
if tun.iface == nil {
|
if tun.iface == nil {
|
||||||
return nil
|
return nil
|
||||||
|
@ -2,16 +2,18 @@
|
|||||||
|
|
||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
import "unsafe"
|
import (
|
||||||
import "syscall"
|
"encoding/binary"
|
||||||
import "strings"
|
"os/exec"
|
||||||
import "strconv"
|
"strconv"
|
||||||
import "encoding/binary"
|
"strings"
|
||||||
import "os/exec"
|
"syscall"
|
||||||
|
"unsafe"
|
||||||
|
|
||||||
import "golang.org/x/sys/unix"
|
"golang.org/x/sys/unix"
|
||||||
|
|
||||||
import "github.com/yggdrasil-network/water"
|
"github.com/yggdrasil-network/water"
|
||||||
|
)
|
||||||
|
|
||||||
const SIOCSIFADDR_IN6 = (0x80000000) | ((288 & 0x1fff) << 16) | uint32(byte('i'))<<8 | 12
|
const SIOCSIFADDR_IN6 = (0x80000000) | ((288 & 0x1fff) << 16) | uint32(byte('i'))<<8 | 12
|
||||||
|
|
||||||
@ -70,6 +72,11 @@ type in6_ifreq_lifetime struct {
|
|||||||
ifru_addrlifetime in6_addrlifetime
|
ifru_addrlifetime in6_addrlifetime
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Sets the IPv6 address of the utun adapter. On all BSD platforms (FreeBSD,
|
||||||
|
// OpenBSD, NetBSD) an attempt is made to set the adapter properties by using
|
||||||
|
// a system socket and making syscalls to the kernel. This is not refined though
|
||||||
|
// and often doesn't work (if at all), therefore if a call fails, it resorts
|
||||||
|
// to calling "ifconfig" instead.
|
||||||
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
var config water.Config
|
var config water.Config
|
||||||
if ifname[:4] == "auto" {
|
if ifname[:4] == "auto" {
|
||||||
|
@ -2,14 +2,19 @@ package yggdrasil
|
|||||||
|
|
||||||
// The darwin platform specific tun parts
|
// The darwin platform specific tun parts
|
||||||
|
|
||||||
import "unsafe"
|
import (
|
||||||
import "strings"
|
"encoding/binary"
|
||||||
import "strconv"
|
"strconv"
|
||||||
import "encoding/binary"
|
"strings"
|
||||||
import "golang.org/x/sys/unix"
|
"unsafe"
|
||||||
|
|
||||||
import water "github.com/yggdrasil-network/water"
|
"golang.org/x/sys/unix"
|
||||||
|
|
||||||
|
water "github.com/yggdrasil-network/water"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Sane defaults for the Darwin/macOS platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 65535,
|
maximumIfMTU: 65535,
|
||||||
@ -19,6 +24,7 @@ func getDefaults() tunDefaultParameters {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Configures the "utun" adapter with the correct IPv6 address and MTU.
|
||||||
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
if iftapmode {
|
if iftapmode {
|
||||||
tun.core.log.Printf("TAP mode is not supported on this platform, defaulting to TUN")
|
tun.core.log.Printf("TAP mode is not supported on this platform, defaulting to TUN")
|
||||||
@ -65,6 +71,8 @@ type ifreq struct {
|
|||||||
ifru_mtu uint32
|
ifru_mtu uint32
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Sets the IPv6 address of the utun adapter. On Darwin/macOS this is done using
|
||||||
|
// a system socket and making direct syscalls to the kernel.
|
||||||
func (tun *tunDevice) setupAddress(addr string) error {
|
func (tun *tunDevice) setupAddress(addr string) error {
|
||||||
var fd int
|
var fd int
|
||||||
var err error
|
var err error
|
||||||
|
@ -1,5 +1,7 @@
|
|||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
|
// Sane defaults for the FreeBSD platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 32767,
|
maximumIfMTU: 32767,
|
||||||
|
@ -1,16 +1,19 @@
|
|||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
// The linux platform specific tun parts
|
// The linux platform specific tun parts
|
||||||
// It depends on iproute2 being installed to set things on the tun device
|
|
||||||
|
|
||||||
import "errors"
|
import (
|
||||||
import "fmt"
|
"errors"
|
||||||
import "net"
|
"fmt"
|
||||||
|
"net"
|
||||||
|
|
||||||
import water "github.com/yggdrasil-network/water"
|
"github.com/docker/libcontainer/netlink"
|
||||||
|
|
||||||
import "github.com/docker/libcontainer/netlink"
|
water "github.com/yggdrasil-network/water"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Sane defaults for the Linux platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 65535,
|
maximumIfMTU: 65535,
|
||||||
@ -20,6 +23,7 @@ func getDefaults() tunDefaultParameters {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Configures the TAP adapter with the correct IPv6 address and MTU.
|
||||||
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
var config water.Config
|
var config water.Config
|
||||||
if iftapmode {
|
if iftapmode {
|
||||||
@ -39,6 +43,10 @@ func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int)
|
|||||||
return tun.setupAddress(addr)
|
return tun.setupAddress(addr)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Configures the TAP adapter with the correct IPv6 address and MTU. Netlink
|
||||||
|
// is used to do this, so there is not a hard requirement on "ip" or "ifconfig"
|
||||||
|
// to exist on the system, but this will fail if Netlink is not present in the
|
||||||
|
// kernel (it nearly always is).
|
||||||
func (tun *tunDevice) setupAddress(addr string) error {
|
func (tun *tunDevice) setupAddress(addr string) error {
|
||||||
// Set address
|
// Set address
|
||||||
var netIF *net.Interface
|
var netIF *net.Interface
|
||||||
|
@ -1,5 +1,7 @@
|
|||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
|
// Sane defaults for the NetBSD platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 9000,
|
maximumIfMTU: 9000,
|
||||||
|
@ -1,5 +1,7 @@
|
|||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
|
// Sane defaults for the OpenBSD platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 16384,
|
maximumIfMTU: 16384,
|
||||||
|
@ -7,6 +7,8 @@ import water "github.com/yggdrasil-network/water"
|
|||||||
// This is to catch unsupported platforms
|
// This is to catch unsupported platforms
|
||||||
// If your platform supports tun devices, you could try configuring it manually
|
// If your platform supports tun devices, you could try configuring it manually
|
||||||
|
|
||||||
|
// These are sane defaults for any platform that has not been matched by one of
|
||||||
|
// the other tun_*.go files.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 65535,
|
maximumIfMTU: 65535,
|
||||||
@ -16,6 +18,8 @@ func getDefaults() tunDefaultParameters {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Creates the TUN/TAP adapter, if supported by the Water library. Note that
|
||||||
|
// no guarantees are made at this point on an unsupported platform.
|
||||||
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
var config water.Config
|
var config water.Config
|
||||||
if iftapmode {
|
if iftapmode {
|
||||||
@ -32,6 +36,8 @@ func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int)
|
|||||||
return tun.setupAddress(addr)
|
return tun.setupAddress(addr)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// We don't know how to set the IPv6 address on an unknown platform, therefore
|
||||||
|
// write about it to stdout and don't try to do anything further.
|
||||||
func (tun *tunDevice) setupAddress(addr string) error {
|
func (tun *tunDevice) setupAddress(addr string) error {
|
||||||
tun.core.log.Println("Platform not supported, you must set the address of", tun.iface.Name(), "to", addr)
|
tun.core.log.Println("Platform not supported, you must set the address of", tun.iface.Name(), "to", addr)
|
||||||
return nil
|
return nil
|
||||||
|
@ -1,12 +1,17 @@
|
|||||||
package yggdrasil
|
package yggdrasil
|
||||||
|
|
||||||
import water "github.com/yggdrasil-network/water"
|
import (
|
||||||
import "os/exec"
|
"fmt"
|
||||||
import "strings"
|
"os/exec"
|
||||||
import "fmt"
|
"strings"
|
||||||
|
|
||||||
|
water "github.com/yggdrasil-network/water"
|
||||||
|
)
|
||||||
|
|
||||||
// This is to catch Windows platforms
|
// This is to catch Windows platforms
|
||||||
|
|
||||||
|
// Sane defaults for the Windows platform. The "default" options may be
|
||||||
|
// replaced by the running configuration.
|
||||||
func getDefaults() tunDefaultParameters {
|
func getDefaults() tunDefaultParameters {
|
||||||
return tunDefaultParameters{
|
return tunDefaultParameters{
|
||||||
maximumIfMTU: 65535,
|
maximumIfMTU: 65535,
|
||||||
@ -16,6 +21,9 @@ func getDefaults() tunDefaultParameters {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Configures the TAP adapter with the correct IPv6 address and MTU. On Windows
|
||||||
|
// we don't make use of a direct operating system API to do this - we instead
|
||||||
|
// delegate the hard work to "netsh".
|
||||||
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int) error {
|
||||||
if !iftapmode {
|
if !iftapmode {
|
||||||
tun.core.log.Printf("TUN mode is not supported on this platform, defaulting to TAP")
|
tun.core.log.Printf("TUN mode is not supported on this platform, defaulting to TAP")
|
||||||
@ -63,6 +71,7 @@ func (tun *tunDevice) setup(ifname string, iftapmode bool, addr string, mtu int)
|
|||||||
return tun.setupAddress(addr)
|
return tun.setupAddress(addr)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Sets the MTU of the TAP adapter.
|
||||||
func (tun *tunDevice) setupMTU(mtu int) error {
|
func (tun *tunDevice) setupMTU(mtu int) error {
|
||||||
// Set MTU
|
// Set MTU
|
||||||
cmd := exec.Command("netsh", "interface", "ipv6", "set", "subinterface",
|
cmd := exec.Command("netsh", "interface", "ipv6", "set", "subinterface",
|
||||||
@ -79,6 +88,7 @@ func (tun *tunDevice) setupMTU(mtu int) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Sets the IPv6 address of the TAP adapter.
|
||||||
func (tun *tunDevice) setupAddress(addr string) error {
|
func (tun *tunDevice) setupAddress(addr string) error {
|
||||||
// Set address
|
// Set address
|
||||||
cmd := exec.Command("netsh", "interface", "ipv6", "add", "address",
|
cmd := exec.Command("netsh", "interface", "ipv6", "add", "address",
|
||||||
|
@ -1,394 +0,0 @@
|
|||||||
package yggdrasil
|
|
||||||
|
|
||||||
// This communicates with peers via UDP
|
|
||||||
// It's not as well tested or debugged as the TCP transport
|
|
||||||
// It's intended to use UDP, so debugging/optimzing this is a high priority
|
|
||||||
// TODO? use golang.org/x/net/ipv6.PacketConn's ReadBatch and WriteBatch?
|
|
||||||
// To send all chunks of a message / recv all available chunks in one syscall
|
|
||||||
// That might be faster on supported platforms, but it needs investigation
|
|
||||||
// Chunks are currently murged, but outgoing messages aren't chunked
|
|
||||||
// This is just to support chunking in the future, if it's needed and debugged
|
|
||||||
// Basically, right now we might send UDP packets that are too large
|
|
||||||
|
|
||||||
// TODO remove old/unused code and better document live code
|
|
||||||
|
|
||||||
import "net"
|
|
||||||
import "time"
|
|
||||||
import "sync"
|
|
||||||
import "fmt"
|
|
||||||
|
|
||||||
type udpInterface struct {
|
|
||||||
core *Core
|
|
||||||
sock *net.UDPConn // Or more general PacketConn?
|
|
||||||
mutex sync.RWMutex // each conn has an owner goroutine
|
|
||||||
conns map[connAddr]*connInfo
|
|
||||||
}
|
|
||||||
|
|
||||||
type connAddr struct {
|
|
||||||
ip [16]byte
|
|
||||||
port int
|
|
||||||
zone string
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *connAddr) fromUDPAddr(u *net.UDPAddr) {
|
|
||||||
copy(c.ip[:], u.IP.To16())
|
|
||||||
c.port = u.Port
|
|
||||||
c.zone = u.Zone
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *connAddr) toUDPAddr() *net.UDPAddr {
|
|
||||||
var u net.UDPAddr
|
|
||||||
u.IP = make([]byte, 16)
|
|
||||||
copy(u.IP, c.ip[:])
|
|
||||||
u.Port = c.port
|
|
||||||
u.Zone = c.zone
|
|
||||||
return &u
|
|
||||||
}
|
|
||||||
|
|
||||||
type connInfo struct {
|
|
||||||
name string
|
|
||||||
addr connAddr
|
|
||||||
peer *peer
|
|
||||||
linkIn chan []byte
|
|
||||||
keysIn chan *udpKeys
|
|
||||||
closeIn chan *udpKeys
|
|
||||||
timeout int // count of how many heartbeats have been missed
|
|
||||||
in func([]byte)
|
|
||||||
out chan []byte
|
|
||||||
countIn uint8
|
|
||||||
countOut uint8
|
|
||||||
chunkSize uint16
|
|
||||||
}
|
|
||||||
|
|
||||||
type udpKeys struct {
|
|
||||||
box boxPubKey
|
|
||||||
sig sigPubKey
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) getAddr() *net.UDPAddr {
|
|
||||||
return iface.sock.LocalAddr().(*net.UDPAddr)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) connect(saddr string) {
|
|
||||||
udpAddr, err := net.ResolveUDPAddr("udp", saddr)
|
|
||||||
if err != nil {
|
|
||||||
panic(err)
|
|
||||||
}
|
|
||||||
var addr connAddr
|
|
||||||
addr.fromUDPAddr(udpAddr)
|
|
||||||
iface.mutex.RLock()
|
|
||||||
_, isIn := iface.conns[addr]
|
|
||||||
iface.mutex.RUnlock()
|
|
||||||
if !isIn {
|
|
||||||
iface.sendKeys(addr)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) init(core *Core, addr string) (err error) {
|
|
||||||
iface.core = core
|
|
||||||
udpAddr, err := net.ResolveUDPAddr("udp", addr)
|
|
||||||
if err != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
iface.sock, err = net.ListenUDP("udp", udpAddr)
|
|
||||||
if err != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
iface.conns = make(map[connAddr]*connInfo)
|
|
||||||
go iface.reader()
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) sendKeys(addr connAddr) {
|
|
||||||
udpAddr := addr.toUDPAddr()
|
|
||||||
msg := []byte{}
|
|
||||||
msg = udp_encode(msg, 0, 0, 0, nil)
|
|
||||||
msg = append(msg, iface.core.boxPub[:]...)
|
|
||||||
msg = append(msg, iface.core.sigPub[:]...)
|
|
||||||
iface.sock.WriteToUDP(msg, udpAddr)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) sendClose(addr connAddr) {
|
|
||||||
udpAddr := addr.toUDPAddr()
|
|
||||||
msg := []byte{}
|
|
||||||
msg = udp_encode(msg, 0, 1, 0, nil)
|
|
||||||
msg = append(msg, iface.core.boxPub[:]...)
|
|
||||||
msg = append(msg, iface.core.sigPub[:]...)
|
|
||||||
iface.sock.WriteToUDP(msg, udpAddr)
|
|
||||||
}
|
|
||||||
|
|
||||||
func udp_isKeys(msg []byte) bool {
|
|
||||||
keyLen := 3 + boxPubKeyLen + sigPubKeyLen
|
|
||||||
return len(msg) == keyLen && msg[0] == 0x00 && msg[1] == 0x00
|
|
||||||
}
|
|
||||||
|
|
||||||
func udp_isClose(msg []byte) bool {
|
|
||||||
keyLen := 3 + boxPubKeyLen + sigPubKeyLen
|
|
||||||
return len(msg) == keyLen && msg[0] == 0x00 && msg[1] == 0x01
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) startConn(info *connInfo) {
|
|
||||||
ticker := time.NewTicker(6 * time.Second)
|
|
||||||
defer ticker.Stop()
|
|
||||||
defer func() {
|
|
||||||
// Cleanup
|
|
||||||
iface.mutex.Lock()
|
|
||||||
delete(iface.conns, info.addr)
|
|
||||||
iface.mutex.Unlock()
|
|
||||||
iface.core.peers.removePeer(info.peer.port)
|
|
||||||
close(info.linkIn)
|
|
||||||
close(info.keysIn)
|
|
||||||
close(info.closeIn)
|
|
||||||
close(info.out)
|
|
||||||
iface.core.log.Println("Removing peer:", info.name)
|
|
||||||
}()
|
|
||||||
for {
|
|
||||||
select {
|
|
||||||
case ks := <-info.closeIn:
|
|
||||||
{
|
|
||||||
if ks.box == info.peer.box && ks.sig == info.peer.sig {
|
|
||||||
// TODO? secure this somehow
|
|
||||||
// Maybe add a signature and sequence number (timestamp) to close and keys?
|
|
||||||
return
|
|
||||||
}
|
|
||||||
}
|
|
||||||
case ks := <-info.keysIn:
|
|
||||||
{
|
|
||||||
// FIXME? need signatures/sequence-numbers or something
|
|
||||||
// Spoofers could lock out a peer with fake/bad keys
|
|
||||||
if ks.box == info.peer.box && ks.sig == info.peer.sig {
|
|
||||||
info.timeout = 0
|
|
||||||
}
|
|
||||||
}
|
|
||||||
case <-ticker.C:
|
|
||||||
{
|
|
||||||
if info.timeout > 10 {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
info.timeout++
|
|
||||||
iface.sendKeys(info.addr)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) handleClose(msg []byte, addr connAddr) {
|
|
||||||
//defer util_putBytes(msg)
|
|
||||||
var ks udpKeys
|
|
||||||
_, _, _, bs := udp_decode(msg)
|
|
||||||
switch {
|
|
||||||
case !wire_chop_slice(ks.box[:], &bs):
|
|
||||||
return
|
|
||||||
case !wire_chop_slice(ks.sig[:], &bs):
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if ks.box == iface.core.boxPub {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if ks.sig == iface.core.sigPub {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
iface.mutex.RLock()
|
|
||||||
conn, isIn := iface.conns[addr]
|
|
||||||
iface.mutex.RUnlock()
|
|
||||||
if !isIn {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
func() {
|
|
||||||
defer func() { recover() }()
|
|
||||||
select {
|
|
||||||
case conn.closeIn <- &ks:
|
|
||||||
default:
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (iface *udpInterface) handleKeys(msg []byte, addr connAddr) {
|
|
||||||
//defer util_putBytes(msg)
|
|
||||||
var ks udpKeys
|
|
||||||
_, _, _, bs := udp_decode(msg)
|
|
||||||
switch {
|
|
||||||
case !wire_chop_slice(ks.box[:], &bs):
|
|
||||||
return
|
|
||||||
case !wire_chop_slice(ks.sig[:], &bs):
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if ks.box == iface.core.boxPub {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if ks.sig == iface.core.sigPub {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
iface.mutex.RLock()
|
|
||||||
conn, isIn := iface.conns[addr]
|
|
||||||
iface.mutex.RUnlock()
|
|
||||||
if !isIn {
|
|
||||||
udpAddr := addr.toUDPAddr()
|
|
||||||
// Check if we're authorized to connect to this key / IP
|
|
||||||
		// TODO monitor and always allow outgoing connections
		if !iface.core.peers.isAllowedEncryptionPublicKey(&ks.box) {
			// Allow unauthorized peers if they're link-local
			if !udpAddr.IP.IsLinkLocalUnicast() {
				return
			}
		}
		themNodeID := getNodeID(&ks.box)
		themAddr := address_addrForNodeID(themNodeID)
		themAddrString := net.IP(themAddr[:]).String()
		themString := fmt.Sprintf("%s@%s", themAddrString, udpAddr.String())
		conn = &connInfo{
			name:      themString,
			addr:      connAddr(addr),
			peer:      iface.core.peers.newPeer(&ks.box, &ks.sig),
			linkIn:    make(chan []byte, 1),
			keysIn:    make(chan *udpKeys, 1),
			closeIn:   make(chan *udpKeys, 1),
			out:       make(chan []byte, 32),
			chunkSize: 576 - 60 - 8 - 3, // max safe - max ip - udp header - chunk overhead
		}
		if udpAddr.IP.IsLinkLocalUnicast() {
			ifce, err := net.InterfaceByName(udpAddr.Zone)
			if ifce != nil && err == nil {
				conn.chunkSize = uint16(ifce.MTU) - 60 - 8 - 3
			}
		}
		var inChunks uint8
		var inBuf []byte
		conn.in = func(bs []byte) {
			//defer util_putBytes(bs)
			chunks, chunk, count, payload := udp_decode(bs)
			if count != conn.countIn {
				if len(inBuf) > 0 {
					// Something went wrong
					// Forward whatever we have
					// Maybe the destination can do something about it
					msg := append(util_getBytes(), inBuf...)
					conn.peer.handlePacket(msg, conn.linkIn)
				}
				inChunks = 0
				inBuf = inBuf[:0]
				conn.countIn = count
			}
			if chunk <= chunks && chunk == inChunks+1 {
				inChunks += 1
				inBuf = append(inBuf, payload...)
				if chunks != chunk {
					return
				}
				msg := append(util_getBytes(), inBuf...)
				conn.peer.handlePacket(msg, conn.linkIn)
				inBuf = inBuf[:0]
			}
		}
		conn.peer.out = func(msg []byte) {
			defer func() { recover() }()
			select {
			case conn.out <- msg:
			default:
				util_putBytes(msg)
			}
		}
		go func() {
			var out []byte
			var chunks [][]byte
			for msg := range conn.out {
				chunks = chunks[:0]
				bs := msg
				for len(bs) > int(conn.chunkSize) {
					chunks, bs = append(chunks, bs[:conn.chunkSize]), bs[conn.chunkSize:]
				}
				chunks = append(chunks, bs)
				if len(chunks) > 255 {
					continue
				}
				start := time.Now()
				for idx, bs := range chunks {
					nChunks, nChunk, count := uint8(len(chunks)), uint8(idx)+1, conn.countOut
					out = udp_encode(out[:0], nChunks, nChunk, count, bs)
					//iface.core.log.Println("DEBUG out:", nChunks, nChunk, count, len(bs))
					iface.sock.WriteToUDP(out, udpAddr)
				}
				timed := time.Since(start)
				conn.countOut += 1
				conn.peer.updateBandwidth(len(msg), timed)
				util_putBytes(msg)
			}
		}()
		//*/
		conn.peer.close = func() { iface.sendClose(conn.addr) }
		iface.mutex.Lock()
		iface.conns[addr] = conn
		iface.mutex.Unlock()
		iface.core.log.Println("Adding peer:", conn.name)
		go iface.startConn(conn)
		go conn.peer.linkLoop(conn.linkIn)
		iface.sendKeys(conn.addr)
	}
	func() {
		defer func() { recover() }()
		select {
		case conn.keysIn <- &ks:
		default:
		}
	}()
}

func (iface *udpInterface) handlePacket(msg []byte, addr connAddr) {
	iface.mutex.RLock()
	if conn, isIn := iface.conns[addr]; isIn {
		conn.in(msg)
	}
	iface.mutex.RUnlock()
}

func (iface *udpInterface) reader() {
	iface.core.log.Println("Listening for UDP on:", iface.sock.LocalAddr().String())
	bs := make([]byte, 65536) // This needs to be large enough for everything...
	for {
		n, udpAddr, err := iface.sock.ReadFromUDP(bs)
		//iface.core.log.Println("DEBUG: read:", bs[0], bs[1], bs[2], n)
		if err != nil {
			panic(err)
			break
		}
		msg := bs[:n]
		var addr connAddr
		addr.fromUDPAddr(udpAddr)
		switch {
		case udp_isKeys(msg):
			var them address
			copy(them[:], udpAddr.IP.To16())
			if them.isValid() {
				continue
			}
			if udpAddr.IP.IsLinkLocalUnicast() {
				if len(iface.core.ifceExpr) == 0 {
					break
				}
				for _, expr := range iface.core.ifceExpr {
					if expr.MatchString(udpAddr.Zone) {
						iface.handleKeys(msg, addr)
						break
					}
				}
			}
		case udp_isClose(msg):
			iface.handleClose(msg, addr)
		default:
			iface.handlePacket(msg, addr)
		}
	}
}

////////////////////////////////////////////////////////////////////////////////

func udp_decode(bs []byte) (chunks, chunk, count uint8, payload []byte) {
	if len(bs) >= 3 {
		chunks, chunk, count, payload = bs[0], bs[1], bs[2], bs[3:]
	}
	return
}

func udp_encode(out []byte, chunks, chunk, count uint8, payload []byte) []byte {
	return append(append(out, chunks, chunk, count), payload...)
}
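For readers following the chunking logic above: each UDP datagram carries a 3-byte header (total chunk count, 1-based chunk index, message counter) ahead of the payload, and the default chunk size is 576 (a conservative MTU) minus 60 (max IP header), 8 (UDP header) and 3 (the chunk header); messages that would need more than 255 chunks are simply dropped. Below is a minimal stand-alone sketch of that framing; the names are illustrative and not taken from this codebase.

package main

import "fmt"

// Illustrative stand-alone sketch of the chunk framing used above:
// each datagram is [total chunks][chunk index (1-based)][message counter] + payload.
func chunkEncode(out []byte, chunks, chunk, count uint8, payload []byte) []byte {
	return append(append(out, chunks, chunk, count), payload...)
}

// chunkSplit breaks msg into pieces of at most chunkSize bytes and frames each
// one with the 3-byte header. A real sender would drop the message entirely if
// it needed more than 255 chunks, since the chunk count is a single byte.
func chunkSplit(msg []byte, chunkSize int, count uint8) [][]byte {
	var parts [][]byte
	for len(msg) > chunkSize {
		parts, msg = append(parts, msg[:chunkSize]), msg[chunkSize:]
	}
	parts = append(parts, msg)
	var datagrams [][]byte
	for idx, part := range parts {
		datagrams = append(datagrams, chunkEncode(nil, uint8(len(parts)), uint8(idx)+1, count, part))
	}
	return datagrams
}

func main() {
	msg := make([]byte, 1200) // bigger than one chunk
	for _, dgram := range chunkSplit(msg, 576-60-8-3, 1) {
		fmt.Println("chunks:", dgram[0], "chunk:", dgram[1], "count:", dgram[2], "payload bytes:", len(dgram)-3)
	}
}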
@@ -4,42 +4,33 @@ package yggdrasil

 import "runtime"

-//import "sync"
+// A wrapper around runtime.Gosched() so it doesn't need to be imported elsewhere.
 func util_yield() {
 	runtime.Gosched()
 }

+// A wrapper around runtime.LockOSThread() so it doesn't need to be imported elsewhere.
 func util_lockthread() {
 	runtime.LockOSThread()
 }

+// A wrapper around runtime.UnlockOSThread() so it doesn't need to be imported elsewhere.
 func util_unlockthread() {
 	runtime.UnlockOSThread()
 }

-/* Used previously, but removed because casting to an interface{} allocates...
-var byteStore sync.Pool = sync.Pool{
-	New: func () interface{} { return []byte(nil) },
-}
-
-func util_getBytes() []byte {
-	return byteStore.Get().([]byte)[:0]
-}
-
-func util_putBytes(bs []byte) {
-	byteStore.Put(bs) // This is the part that allocates
-}
-*/
-
+// This is used to buffer recently used slices of bytes, to prevent allocations in the hot loops.
+// It's used like a sync.Pool, but with a fixed size and typechecked without type casts to/from interface{} (which were making the profiles look ugly).
 var byteStore chan []byte

+// Initializes the byteStore
 func util_initByteStore() {
 	if byteStore == nil {
 		byteStore = make(chan []byte, 32)
 	}
 }

+// Gets an empty slice from the byte store, if one is available, or else returns a new nil slice.
 func util_getBytes() []byte {
 	select {
 	case bs := <-byteStore:
@@ -49,6 +40,7 @@ func util_getBytes() []byte {
 	}
 }

+// Puts a slice in the store, if there's room, or else returns and lets the slice get collected.
 func util_putBytes(bs []byte) {
 	select {
 	case byteStore <- bs:
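A minimal stand-alone sketch of the byte-store idea above: a fixed-size buffered channel acting as a typed free list for byte slices, so hot paths avoid both allocation and interface{} conversions. Names are illustrative rather than the ones in this codebase.

package main

import "fmt"

// A fixed-size buffered channel used as a typed pool of byte slices.
var byteStore = make(chan []byte, 32)

func getBytes() []byte {
	select {
	case bs := <-byteStore:
		return bs[:0] // reuse a stored slice, keeping its capacity
	default:
		return nil // nothing stored; append will allocate as needed
	}
}

func putBytes(bs []byte) {
	select {
	case byteStore <- bs: // keep it for later if there's room
	default: // otherwise drop it and let the GC collect it
	}
}

func main() {
	buf := append(getBytes(), []byte("hello")...)
	fmt.Println(string(buf), cap(buf))
	putBytes(buf)
	reused := getBytes()
	fmt.Println(len(reused), cap(reused)) // length 0, but capacity is retained
}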
src/yggdrasil/version.go (new file, 78 lines)
@@ -0,0 +1,78 @@
+package yggdrasil
+
+// This file contains the version metadata struct
+// Used in the initial connection setup and key exchange
+// Some of this could arguably go in wire.go instead
+
+// This is the version-specific metadata exchanged at the start of a connection.
+// It must always begin with the 4 bytes "meta" and a wire formatted uint64 major version number.
+// The current version also includes a minor version number, and the box/sig/link keys that need to be exchanged to open a connection.
+type version_metadata struct {
+	meta [4]byte
+	ver  uint64 // 1 byte in this version
+	// Everything after this point potentially depends on the version number, and is subject to change in future versions
+	minorVer uint64 // 1 byte in this version
+	box      boxPubKey
+	sig      sigPubKey
+	link     boxPubKey
+}
+
+// Gets a base metadata with no keys set, but with the correct version numbers.
+func version_getBaseMetadata() version_metadata {
+	return version_metadata{
+		meta:     [4]byte{'m', 'e', 't', 'a'},
+		ver:      0,
+		minorVer: 2,
+	}
+}
+
+// Gets the length of the metadata for this version, used to know how many bytes to read from the start of a connection.
+func version_getMetaLength() (mlen int) {
+	mlen += 4            // meta
+	mlen += 1            // ver, as long as it's < 127, which it is in this version
+	mlen += 1            // minorVer, as long as it's < 127, which it is in this version
+	mlen += boxPubKeyLen // box
+	mlen += sigPubKeyLen // sig
+	mlen += boxPubKeyLen // link
+	return
+}
+
+// Encodes version metadata into its wire format.
+func (m *version_metadata) encode() []byte {
+	bs := make([]byte, 0, version_getMetaLength())
+	bs = append(bs, m.meta[:]...)
+	bs = append(bs, wire_encode_uint64(m.ver)...)
+	bs = append(bs, wire_encode_uint64(m.minorVer)...)
+	bs = append(bs, m.box[:]...)
+	bs = append(bs, m.sig[:]...)
+	bs = append(bs, m.link[:]...)
+	if len(bs) != version_getMetaLength() {
+		panic("Inconsistent metadata length")
+	}
+	return bs
+}
+
+// Decodes version metadata from its wire format into the struct.
+func (m *version_metadata) decode(bs []byte) bool {
+	switch {
+	case !wire_chop_slice(m.meta[:], &bs):
+		return false
+	case !wire_chop_uint64(&m.ver, &bs):
+		return false
+	case !wire_chop_uint64(&m.minorVer, &bs):
+		return false
+	case !wire_chop_slice(m.box[:], &bs):
+		return false
+	case !wire_chop_slice(m.sig[:], &bs):
+		return false
+	case !wire_chop_slice(m.link[:], &bs):
+		return false
+	}
+	return true
+}
+
+// Checks that the "meta" bytes and the version numbers are the expected values.
+func (m *version_metadata) check() bool {
+	base := version_getBaseMetadata()
+	return base.meta == m.meta && base.ver == m.ver && base.minorVer == m.minorVer
+}
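A small stand-alone sketch of the metadata layout above. The 32-byte key lengths are assumptions made here for illustration (typical NaCl box and Ed25519 public key sizes); the real lengths come from the project's crypto constants, and any value below 128 occupies a single byte in the varint encoding.

package main

import "fmt"

// Assumed key sizes for illustration only; the real code uses its own constants.
const (
	boxPubKeyLen = 32
	sigPubKeyLen = 32
)

// metaLength mirrors the layout described above: "meta", two small version
// numbers, then the box, sig and link public keys.
func metaLength() int {
	return 4 + // the 4 bytes "meta"
		1 + // major version, 1 byte while it's small
		1 + // minor version, 1 byte while it's small
		boxPubKeyLen + // box key
		sigPubKeyLen + // sig key
		boxPubKeyLen // link key
}

func encodeMeta(ver, minorVer byte, box, sig, link []byte) []byte {
	bs := make([]byte, 0, metaLength())
	bs = append(bs, 'm', 'e', 't', 'a')
	bs = append(bs, ver, minorVer)
	bs = append(bs, box...)
	bs = append(bs, sig...)
	bs = append(bs, link...)
	return bs
}

func main() {
	box, sig, link := make([]byte, boxPubKeyLen), make([]byte, sigPubKeyLen), make([]byte, boxPubKeyLen)
	bs := encodeMeta(0, 2, box, sig, link)
	fmt.Println(len(bs), metaLength()) // both 102 under the assumed key sizes
}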
@@ -5,31 +5,26 @@ package yggdrasil

 // TODO clean up unused/commented code, and add better comments to whatever is left

-// Packet types, as an Encode_uint64 at the start of each packet
-// TODO? make things still work after reordering (after things stabilize more?)
-// Type safety would also be nice, `type wire_type uint64`, rewrite as needed?
+// Packet types, as wire_encode_uint64(type) at the start of each packet
 const (
 	wire_Traffic             = iota // data being routed somewhere, handle for crypto
 	wire_ProtocolTraffic            // protocol traffic, pub keys for crypto
 	wire_LinkProtocolTraffic        // link proto traffic, pub keys for crypto
-	wire_SwitchAnnounce             // inside protocol traffic header
-	wire_SwitchHopRequest           // inside protocol traffic header
-	wire_SwitchHop                  // inside protocol traffic header
+	wire_SwitchMsg                  // inside link protocol traffic header
 	wire_SessionPing                // inside protocol traffic header
 	wire_SessionPong                // inside protocol traffic header
 	wire_DHTLookupRequest           // inside protocol traffic header
 	wire_DHTLookupResponse          // inside protocol traffic header
-	wire_SearchRequest              // inside protocol traffic header
-	wire_SearchResponse             // inside protocol traffic header
 )

-// Encode uint64 using a variable length scheme
-// Similar to binary.Uvarint, but big-endian
+// Calls wire_put_uint64 on a nil slice.
 func wire_encode_uint64(elem uint64) []byte {
 	return wire_put_uint64(elem, nil)
 }

-// Occasionally useful for appending to an existing slice (if there's room)
+// Encode uint64 using a variable length scheme.
+// Similar to binary.Uvarint, but big-endian.
 func wire_put_uint64(elem uint64, out []byte) []byte {
 	bs := make([]byte, 0, 10)
 	bs = append(bs, byte(elem&0x7f))
@@ -45,6 +40,7 @@ func wire_put_uint64(elem uint64, out []byte) []byte {
 	return append(out, bs...)
 }

+// Returns the length of a wire encoded uint64 of this value.
 func wire_uint64_len(elem uint64) int {
 	l := 1
 	for e := elem >> 7; e > 0; e >>= 7 {
@@ -53,8 +49,8 @@ func wire_uint64_len(elem uint64) int {
 	return l
 }

-// Decode uint64 from a []byte slice
-// Returns the decoded uint64 and the number of bytes used
+// Decode uint64 from a []byte slice.
+// Returns the decoded uint64 and the number of bytes used.
 func wire_decode_uint64(bs []byte) (uint64, int) {
 	length := 0
 	elem := uint64(0)
@@ -69,29 +65,22 @@ func wire_decode_uint64(bs []byte) (uint64, int) {
 	return elem, length
 }
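The comments above describe the integer encoding as a big-endian analogue of binary.Uvarint: 7 bits per byte, most significant group first, with the high bit set on every byte except the last. A stand-alone sketch of that scheme, under that reading, with illustrative names:

package main

import "fmt"

// putUint64 writes a big-endian base-128 varint: 7 bits per byte, most
// significant group first, continuation bytes marked with the 0x80 bit.
func putUint64(elem uint64, out []byte) []byte {
	var bs [10]byte // 10 bytes is enough for any uint64 at 7 bits per byte
	idx := len(bs) - 1
	bs[idx] = byte(elem & 0x7f) // final byte: high bit clear
	for elem >>= 7; elem > 0; elem >>= 7 {
		idx--
		bs[idx] = byte(elem&0x7f) | 0x80 // continuation bytes: high bit set
	}
	return append(out, bs[idx:]...)
}

// decodeUint64 returns the value and the number of bytes consumed, or (0, 0)
// if the slice ends before the terminating byte.
func decodeUint64(bs []byte) (uint64, int) {
	var elem uint64
	for idx, b := range bs {
		elem = (elem << 7) | uint64(b&0x7f)
		if b&0x80 == 0 {
			return elem, idx + 1
		}
	}
	return 0, 0
}

func main() {
	for _, v := range []uint64{0, 1, 127, 128, 300, 1 << 32} {
		bs := putUint64(v, nil)
		dec, n := decodeUint64(bs)
		fmt.Printf("%d -> % x -> %d (%d bytes)\n", v, bs, dec, n)
	}
}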

+// Converts an int64 into uint64 so it can be written to the wire.
+// Non-negative integers are mapped to even integers: 0 -> 0, 1 -> 2, etc.
+// Negative integers are mapped to odd integers: -1 -> 1, -2 -> 3, etc.
+// This means the least significant bit is a sign bit.
 func wire_intToUint(i int64) uint64 {
-	var u uint64
-	if i < 0 {
-		u = uint64(-i) << 1
-		u |= 0x01 // sign bit
-	} else {
-		u = uint64(i) << 1
-	}
-	return u
+	return ((uint64(-(i+1))<<1)|0x01)*(uint64(i)>>63) + (uint64(i)<<1)*(^uint64(i)>>63)
 }

+// Converts uint64 back to int64, generally when being read from the wire.
 func wire_intFromUint(u uint64) int64 {
-	var i int64
-	i = int64(u >> 1)
-	if u&0x01 != 0 {
-		i *= -1
-	}
-	return i
+	return int64(u&0x01)*(-int64(u>>1)-1) + int64(^u&0x01)*int64(u>>1)
 }

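The branchless expressions above implement the mapping spelled out in the comments (the least significant bit carries the sign). A more readable stand-alone version, which should be equivalent, plus a quick round-trip check:

package main

import "fmt"

// intToUint maps 0,-1,1,-2,2,... to 0,1,2,3,4,...: non-negative values become
// even, negative values become odd, so the low bit acts as the sign.
func intToUint(i int64) uint64 {
	if i < 0 {
		return (uint64(-(i+1)) << 1) | 0x01 // odd values are negative
	}
	return uint64(i) << 1 // even values are non-negative
}

// intFromUint inverts the mapping above.
func intFromUint(u uint64) int64 {
	if u&0x01 != 0 {
		return -int64(u>>1) - 1
	}
	return int64(u >> 1)
}

func main() {
	for _, i := range []int64{0, 1, -1, 2, -2, 1234567, -1234567} {
		u := intToUint(i)
		fmt.Println(i, "->", u, "->", intFromUint(u)) // round-trips back to i
	}
}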
 ////////////////////////////////////////////////////////////////////////////////

-// Takes coords, returns coords prefixed with encoded coord length
+// Takes coords, returns coords prefixed with encoded coord length.
 func wire_encode_coords(coords []byte) []byte {
 	coordLen := wire_encode_uint64(uint64(len(coords)))
 	bs := make([]byte, 0, len(coordLen)+len(coords))
@@ -100,14 +89,17 @@ func wire_encode_coords(coords []byte) []byte {
 	return bs
 }

+// Puts a length prefix and the coords into bs, returns the wire formatted coords.
+// Useful in hot loops where we don't want to allocate and we know the rest of the later parts of the slice are safe to overwrite.
 func wire_put_coords(coords []byte, bs []byte) []byte {
 	bs = wire_put_uint64(uint64(len(coords)), bs)
 	bs = append(bs, coords...)
 	return bs
 }

-// Takes a packet that begins with coords (starting with coord length)
-// Returns a slice of coords and the number of bytes read
+// Takes a slice that begins with coords (starting with coord length).
+// Returns a slice of coords and the number of bytes read.
+// Used as part of various decode() functions for structs.
 func wire_decode_coords(packet []byte) ([]byte, int) {
 	coordLen, coordBegin := wire_decode_uint64(packet)
 	coordEnd := coordBegin + int(coordLen)
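The coords helpers above frame a byte string with a length prefix. A stand-alone sketch of that framing follows; for brevity the length is written as a single byte, which matches the varint encoding whenever the coords are shorter than 128 bytes, and the names are illustrative:

package main

import "fmt"

// encodeCoords prepends the coord length to the coords themselves.
func encodeCoords(coords []byte) []byte {
	bs := make([]byte, 0, 1+len(coords))
	bs = append(bs, byte(len(coords)))
	return append(bs, coords...)
}

// decodeCoords returns the coords and the total number of bytes read,
// or (nil, 0) if the slice is too short.
func decodeCoords(packet []byte) ([]byte, int) {
	if len(packet) == 0 {
		return nil, 0
	}
	coordLen := int(packet[0])
	coordEnd := 1 + coordLen
	if coordEnd > len(packet) {
		return nil, 0
	}
	return packet[1:coordEnd], coordEnd
}

func main() {
	packet := append(encodeCoords([]byte{1, 3, 2}), 0xff, 0xee) // coords followed by more payload
	coords, n := decodeCoords(packet)
	fmt.Println(coords, n, packet[n:]) // [1 3 2] 4 [255 238]
}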
@@ -119,145 +111,52 @@ func wire_decode_coords(packet []byte) ([]byte, int) {

 ////////////////////////////////////////////////////////////////////////////////

-// Announces that we can send parts of a Message with a particular seq
-type msgAnnounce struct {
-	Root   sigPubKey
-	Tstamp int64
-	Seq    uint64
-	Len    uint64
-	//Deg uint64
-	Rseq uint64
-}
-
-func (m *msgAnnounce) encode() []byte {
-	bs := wire_encode_uint64(wire_SwitchAnnounce)
+// Encodes a switchMsg into its wire format.
+func (m *switchMsg) encode() []byte {
+	bs := wire_encode_uint64(wire_SwitchMsg)
 	bs = append(bs, m.Root[:]...)
-	bs = append(bs, wire_encode_uint64(wire_intToUint(m.Tstamp))...)
-	bs = append(bs, wire_encode_uint64(m.Seq)...)
-	bs = append(bs, wire_encode_uint64(m.Len)...)
-	bs = append(bs, wire_encode_uint64(m.Rseq)...)
+	bs = append(bs, wire_encode_uint64(wire_intToUint(m.TStamp))...)
+	for _, hop := range m.Hops {
+		bs = append(bs, wire_encode_uint64(uint64(hop.Port))...)
+		bs = append(bs, hop.Next[:]...)
+		bs = append(bs, hop.Sig[:]...)
+	}
 	return bs
 }

-func (m *msgAnnounce) decode(bs []byte) bool {
+// Decodes a wire formatted switchMsg into the struct, returns true if successful.
+func (m *switchMsg) decode(bs []byte) bool {
 	var pType uint64
 	var tstamp uint64
 	switch {
 	case !wire_chop_uint64(&pType, &bs):
 		return false
-	case pType != wire_SwitchAnnounce:
+	case pType != wire_SwitchMsg:
 		return false
 	case !wire_chop_slice(m.Root[:], &bs):
 		return false
 	case !wire_chop_uint64(&tstamp, &bs):
 		return false
-	case !wire_chop_uint64(&m.Seq, &bs):
-		return false
-	case !wire_chop_uint64(&m.Len, &bs):
-		return false
-	case !wire_chop_uint64(&m.Rseq, &bs):
-		return false
 	}
-	m.Tstamp = wire_intFromUint(tstamp)
+	m.TStamp = wire_intFromUint(tstamp)
+	for len(bs) > 0 {
+		var hop switchMsgHop
+		switch {
+		case !wire_chop_uint64((*uint64)(&hop.Port), &bs):
+			return false
+		case !wire_chop_slice(hop.Next[:], &bs):
+			return false
+		case !wire_chop_slice(hop.Sig[:], &bs):
+			return false
+		}
+		m.Hops = append(m.Hops, hop)
+	}
 	return true
 }
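The switchMsg wire format introduced here is: the wire_SwitchMsg type (3, from the const block above), the root's signing key, a varint timestamp, then one (port, next key, signature) triple per hop. A stand-alone sketch of just that byte layout; the 32-byte key and 64-byte signature sizes are stand-ins rather than the project's actual constants, and the varints are shown as single bytes, as they would be for small values:

package main

import "fmt"

// One hop of the message: which port it was forwarded on, the next node's
// signing key, and a signature covering the path so far.
type hop struct {
	port uint8    // stand-in for a varint-encoded switch port
	next [32]byte // stand-in signing key size
	sig  [64]byte // stand-in signature size
}

type switchMsgSketch struct {
	root   [32]byte // stand-in signing key size
	tstamp uint8    // stand-in for the varint-encoded timestamp
	hops   []hop
}

// encode lays the fields out in the order used by the real switchMsg encoder.
func (m *switchMsgSketch) encode() []byte {
	bs := []byte{3} // wire_SwitchMsg's position in the const block above
	bs = append(bs, m.root[:]...)
	bs = append(bs, m.tstamp)
	for _, h := range m.hops {
		bs = append(bs, h.port)
		bs = append(bs, h.next[:]...)
		bs = append(bs, h.sig[:]...)
	}
	return bs
}

func main() {
	msg := switchMsgSketch{hops: make([]hop, 3)}
	fmt.Println(len(msg.encode())) // 1 + 32 + 1 + 3*(1+32+64) = 325 under these stand-in sizes
}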

-type msgHopReq struct {
-	Root   sigPubKey
-	Tstamp int64
-	Seq    uint64
-	Hop    uint64
-}
-
-func (m *msgHopReq) encode() []byte {
-	bs := wire_encode_uint64(wire_SwitchHopRequest)
-	bs = append(bs, m.Root[:]...)
-	bs = append(bs, wire_encode_uint64(wire_intToUint(m.Tstamp))...)
-	bs = append(bs, wire_encode_uint64(m.Seq)...)
-	bs = append(bs, wire_encode_uint64(m.Hop)...)
-	return bs
-}
-
-func (m *msgHopReq) decode(bs []byte) bool {
-	var pType uint64
-	var tstamp uint64
-	switch {
-	case !wire_chop_uint64(&pType, &bs):
-		return false
-	case pType != wire_SwitchHopRequest:
-		return false
-	case !wire_chop_slice(m.Root[:], &bs):
-		return false
-	case !wire_chop_uint64(&tstamp, &bs):
-		return false
-	case !wire_chop_uint64(&m.Seq, &bs):
-		return false
-	case !wire_chop_uint64(&m.Hop, &bs):
-		return false
-	}
-	m.Tstamp = wire_intFromUint(tstamp)
-	return true
-}
-
-type msgHop struct {
-	Root   sigPubKey
-	Tstamp int64
-	Seq    uint64
-	Hop    uint64
-	Port   switchPort
-	Next   sigPubKey
-	Sig    sigBytes
-}
-
-func (m *msgHop) encode() []byte {
-	bs := wire_encode_uint64(wire_SwitchHop)
-	bs = append(bs, m.Root[:]...)
-	bs = append(bs, wire_encode_uint64(wire_intToUint(m.Tstamp))...)
-	bs = append(bs, wire_encode_uint64(m.Seq)...)
-	bs = append(bs, wire_encode_uint64(m.Hop)...)
-	bs = append(bs, wire_encode_uint64(uint64(m.Port))...)
-	bs = append(bs, m.Next[:]...)
-	bs = append(bs, m.Sig[:]...)
-	return bs
-}
-
-func (m *msgHop) decode(bs []byte) bool {
-	var pType uint64
-	var tstamp uint64
-	switch {
-	case !wire_chop_uint64(&pType, &bs):
-		return false
-	case pType != wire_SwitchHop:
-		return false
-	case !wire_chop_slice(m.Root[:], &bs):
-		return false
-	case !wire_chop_uint64(&tstamp, &bs):
-		return false
-	case !wire_chop_uint64(&m.Seq, &bs):
-		return false
-	case !wire_chop_uint64(&m.Hop, &bs):
-		return false
-	case !wire_chop_uint64((*uint64)(&m.Port), &bs):
-		return false
-	case !wire_chop_slice(m.Next[:], &bs):
-		return false
-	case !wire_chop_slice(m.Sig[:], &bs):
-		return false
-	}
-	m.Tstamp = wire_intFromUint(tstamp)
-	return true
-}
-
-// Format used to check signatures only, so no need to also support decoding
-func wire_encode_locator(loc *switchLocator) []byte {
-	coords := wire_encode_coords(loc.getCoords())
-	var bs []byte
-	bs = append(bs, loc.root[:]...)
-	bs = append(bs, wire_encode_uint64(wire_intToUint(loc.tstamp))...)
-	bs = append(bs, coords...)
-	return bs
-}
+
+////////////////////////////////////////////////////////////////////////////////

+// A utility function used to copy bytes into a slice and advance the beginning of the source slice, returns true if successful.
 func wire_chop_slice(toSlice []byte, fromSlice *[]byte) bool {
 	if len(*fromSlice) < len(toSlice) {
 		return false
@@ -267,6 +166,7 @@ func wire_chop_slice(toSlice []byte, fromSlice *[]byte) bool {
 	return true
 }

+// A utility function to extract coords from a slice and advance the source slices, returning true if successful.
 func wire_chop_coords(toCoords *[]byte, fromSlice *[]byte) bool {
 	coords, coordLen := wire_decode_coords(*fromSlice)
 	if coordLen == 0 {
@@ -277,6 +177,7 @@ func wire_chop_coords(toCoords *[]byte, fromSlice *[]byte) bool {
 	return true
 }

+// A utility function to extract a wire encoded uint64 into the provided pointer while advancing the start of the source slice, returning true if successful.
 func wire_chop_uint64(toUInt64 *uint64, fromSlice *[]byte) bool {
 	dec, decLen := wire_decode_uint64(*fromSlice)
 	if decLen == 0 {
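These chop helpers let a decode() peel fields off the front of a packet and bail out on the first failure via a switch. A stand-alone sketch of the same pattern, using an illustrative two-field packet and simplified helpers:

package main

import "fmt"

// chopSlice copies len(to) bytes from the front of *from into to and advances *from.
func chopSlice(to []byte, from *[]byte) bool {
	if len(*from) < len(to) {
		return false
	}
	copy(to, *from)
	*from = (*from)[len(to):]
	return true
}

// chopByte consumes a single byte from the front of *from.
func chopByte(to *byte, from *[]byte) bool {
	if len(*from) < 1 {
		return false
	}
	*to = (*from)[0]
	*from = (*from)[1:]
	return true
}

// An illustrative packet: a 1-byte type followed by a 4-byte key.
type packet struct {
	pType byte
	key   [4]byte
}

// decode falls through the switch cases, stopping at the first field that
// can't be chopped off the front of the slice.
func (p *packet) decode(bs []byte) bool {
	switch {
	case !chopByte(&p.pType, &bs):
		return false
	case !chopSlice(p.key[:], &bs):
		return false
	}
	return true
}

func main() {
	var p packet
	fmt.Println(p.decode([]byte{7, 1, 2, 3, 4}), p) // true {7 [1 2 3 4]}
	fmt.Println(p.decode([]byte{7, 1}))             // false: too short
}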
@@ -291,19 +192,18 @@ func wire_chop_uint64(toUInt64 *uint64, fromSlice *[]byte) bool {

 // Wire traffic packets

+// The wire format for ordinary IPv6 traffic encapsulated by the network.
 type wire_trafficPacket struct {
-	TTL     uint64
 	Coords  []byte
 	Handle  handle
 	Nonce   boxNonce
 	Payload []byte
 }

-// This is basically MarshalBinary, but decode doesn't allow that...
+// Encodes a wire_trafficPacket into its wire format.
 func (p *wire_trafficPacket) encode() []byte {
 	bs := util_getBytes()
 	bs = wire_put_uint64(wire_Traffic, bs)
-	bs = wire_put_uint64(p.TTL, bs)
 	bs = wire_put_coords(p.Coords, bs)
 	bs = append(bs, p.Handle[:]...)
 	bs = append(bs, p.Nonce[:]...)
@@ -311,7 +211,7 @@ func (p *wire_trafficPacket) encode() []byte {
 	return bs
 }

-// Not just UnmarshalBinary becuase the original slice isn't always copied from
+// Decodes an encoded wire_trafficPacket into the struct, returning true if successful.
 func (p *wire_trafficPacket) decode(bs []byte) bool {
 	var pType uint64
 	switch {
@@ -319,8 +219,6 @@ func (p *wire_trafficPacket) decode(bs []byte) bool {
 		return false
 	case pType != wire_Traffic:
 		return false
-	case !wire_chop_uint64(&p.TTL, &bs):
-		return false
 	case !wire_chop_coords(&p.Coords, &bs):
 		return false
 	case !wire_chop_slice(p.Handle[:], &bs):
@@ -332,8 +230,8 @@ func (p *wire_trafficPacket) decode(bs []byte) bool {
 	return true
 }

+// The wire format for protocol traffic, such as dht req/res or session ping/pong packets.
 type wire_protoTrafficPacket struct {
-	TTL     uint64
 	Coords  []byte
 	ToKey   boxPubKey
 	FromKey boxPubKey
@@ -341,10 +239,10 @@ type wire_protoTrafficPacket struct {
 	Payload []byte
 }

+// Encodes a wire_protoTrafficPacket into its wire format.
 func (p *wire_protoTrafficPacket) encode() []byte {
 	coords := wire_encode_coords(p.Coords)
 	bs := wire_encode_uint64(wire_ProtocolTraffic)
-	bs = append(bs, wire_encode_uint64(p.TTL)...)
 	bs = append(bs, coords...)
 	bs = append(bs, p.ToKey[:]...)
 	bs = append(bs, p.FromKey[:]...)
@@ -353,6 +251,7 @@ func (p *wire_protoTrafficPacket) encode() []byte {
 	return bs
 }

+// Decodes an encoded wire_protoTrafficPacket into the struct, returning true if successful.
 func (p *wire_protoTrafficPacket) decode(bs []byte) bool {
 	var pType uint64
 	switch {
@@ -360,8 +259,6 @@ func (p *wire_protoTrafficPacket) decode(bs []byte) bool {
 		return false
 	case pType != wire_ProtocolTraffic:
 		return false
-	case !wire_chop_uint64(&p.TTL, &bs):
-		return false
 	case !wire_chop_coords(&p.Coords, &bs):
 		return false
 	case !wire_chop_slice(p.ToKey[:], &bs):
@@ -375,11 +272,16 @@ func (p *wire_protoTrafficPacket) decode(bs []byte) bool {
 	return true
 }

+// The wire format for link protocol traffic, namely switchMsg.
+// There's really two layers of this, with the outer layer using permanent keys, and the inner layer using ephemeral keys.
+// The keys themselves are exchanged as part of the connection setup, and then omitted from the packets.
+// The two layer logic is handled in peers.go, but it's kind of ugly.
 type wire_linkProtoTrafficPacket struct {
 	Nonce   boxNonce
 	Payload []byte
 }

+// Encodes a wire_linkProtoTrafficPacket into its wire format.
 func (p *wire_linkProtoTrafficPacket) encode() []byte {
 	bs := wire_encode_uint64(wire_LinkProtocolTraffic)
 	bs = append(bs, p.Nonce[:]...)
@@ -387,6 +289,7 @@ func (p *wire_linkProtoTrafficPacket) encode() []byte {
 	return bs
 }

+// Decodes an encoded wire_linkProtoTrafficPacket into the struct, returning true if successful.
 func (p *wire_linkProtoTrafficPacket) decode(bs []byte) bool {
 	var pType uint64
 	switch {
@@ -403,6 +306,7 @@ func (p *wire_linkProtoTrafficPacket) decode(bs []byte) bool {

 ////////////////////////////////////////////////////////////////////////////////

+// Encodes a sessionPing into its wire format.
 func (p *sessionPing) encode() []byte {
 	var pTypeVal uint64
 	if p.IsPong {
@@ -421,6 +325,7 @@ func (p *sessionPing) encode() []byte {
 	return bs
 }

+// Decodes an encoded sessionPing into the struct, returning true if successful.
 func (p *sessionPing) decode(bs []byte) bool {
 	var pType uint64
 	var tstamp uint64
@@ -452,6 +357,7 @@ func (p *sessionPing) decode(bs []byte) bool {

 ////////////////////////////////////////////////////////////////////////////////

+// Encodes a dhtReq into its wire format.
 func (r *dhtReq) encode() []byte {
 	coords := wire_encode_coords(r.Coords)
 	bs := wire_encode_uint64(wire_DHTLookupRequest)
@@ -460,6 +366,7 @@ func (r *dhtReq) encode() []byte {
 	return bs
 }

+// Decodes an encoded dhtReq into the struct, returning true if successful.
 func (r *dhtReq) decode(bs []byte) bool {
 	var pType uint64
 	switch {
@@ -476,6 +383,7 @@ func (r *dhtReq) decode(bs []byte) bool {
 	}
 }

+// Encodes a dhtRes into its wire format.
 func (r *dhtRes) encode() []byte {
 	coords := wire_encode_coords(r.Coords)
 	bs := wire_encode_uint64(wire_DHTLookupResponse)
@@ -489,6 +397,7 @@ func (r *dhtRes) encode() []byte {
 	return bs
 }

+// Decodes an encoded dhtRes into the struct, returning true if successful.
 func (r *dhtRes) decode(bs []byte) bool {
 	var pType uint64
 	switch {
@@ -513,59 +422,3 @@ func (r *dhtRes) decode(bs []byte) bool {
 	}
 	return true
 }
-
-////////////////////////////////////////////////////////////////////////////////
-
-func (r *searchReq) encode() []byte {
-	coords := wire_encode_coords(r.coords)
-	bs := wire_encode_uint64(wire_SearchRequest)
-	bs = append(bs, r.key[:]...)
-	bs = append(bs, coords...)
-	bs = append(bs, r.dest[:]...)
-	return bs
-}
-
-func (r *searchReq) decode(bs []byte) bool {
-	var pType uint64
-	switch {
-	case !wire_chop_uint64(&pType, &bs):
-		return false
-	case pType != wire_SearchRequest:
-		return false
-	case !wire_chop_slice(r.key[:], &bs):
-		return false
-	case !wire_chop_coords(&r.coords, &bs):
-		return false
-	case !wire_chop_slice(r.dest[:], &bs):
-		return false
-	default:
-		return true
-	}
-}
-
-func (r *searchRes) encode() []byte {
-	coords := wire_encode_coords(r.coords)
-	bs := wire_encode_uint64(wire_SearchResponse)
-	bs = append(bs, r.key[:]...)
-	bs = append(bs, coords...)
-	bs = append(bs, r.dest[:]...)
-	return bs
-}
-
-func (r *searchRes) decode(bs []byte) bool {
-	var pType uint64
-	switch {
-	case !wire_chop_uint64(&pType, &bs):
-		return false
-	case pType != wire_SearchResponse:
-		return false
-	case !wire_chop_slice(r.key[:], &bs):
-		return false
-	case !wire_chop_coords(&r.coords, &bs):
-		return false
-	case !wire_chop_slice(r.dest[:], &bs):
-		return false
-	default:
-		return true
-	}
-}