Personal view on RFC 3869 by Remco Hobo

Preface:
In this article I will give my opinion on the recently published RFC (Request for Comments) 3869. This RFC sounds the alarm about recent developments on the internet. It discusses new problem areas that are in need of attention. In the past, research into these problem areas was carried out by universities. Nowadays it seems that everybody is busy looking into small areas of internet technology and is neglecting the big picture. The RFC draws attention to these developments and discusses these problem areas before they become a real threat to the continuity of the internet and its services. I will comment on paragraphs 3.2 to 3.13.

Paragraph 3.2 Naming

3.2.1 Domain Name System (DNS)
This paragraph describes the impact on the performance of a DNS server that can be expected with the introduction of IPv6. Today, with IPv4, UDP is an effective way to ask for a DNS entry because it is relatively stateless. With IPv6 this process will be much more complicated, involving the mandated Path MTU discovery, which means a significant increase in load for DNS servers. Studies into alternative means of transport besides UDP are needed to relieve some of this load, and investigation into new ways of caching DNS entries is needed as well.

My opinion is that there should first be a study into how much this will actually impact the performance of a DNS server; it might be more economical to simply increase the number of DNS servers. The increase in DNS traffic when switching to IPv6 should also be investigated. This brings me to the next thing mentioned in the RFC: there should be one uniform way of measuring DNS performance. You cannot judge the performance of something if you don't know its acceptable values. This is one of the keystones of a uniform standard and should have been there from the start.

Security of DNS is also of high priority, but it might impact performance dramatically if, for instance, the DNS server has to identify itself with certificates before processing a DNS request. There has to be a way to easily identify a DNS server without a great impact on performance. A study into this also has to take place.

3.2.2 New Namespaces
This paragraph doesn't really give any concrete advice or information. It just mentions investigating the need for new namespaces. A study of this subject is needed to get a clearer picture of it.

Paragraph 3.3 Routing

3.3.1 Inter-Domain Routing
The current operational inter-domain routing system has between 150,000 and 200,000 routing prefixes in the default-free zone, and this number can grow to 300,000 in the near future. Some people have voiced worries about the routing protocol hitting a fundamental algorithmic limitation. If that is the case, the integrity of the internet may become impaired. A study has to take place to see whether these worries are well-founded.

3.3.2 Routing Integrity
This paragraph discusses the issue that the actual routing data isn't secured. It should be, because routes can currently be forged by tampering with packets. The problem with authentication is that you need a shared trusted authority, but which server can you really trust on the internet? This problem goes back to the first days of authentication and seems to be a fundamental one.

3.3.3 Routing Algorithms
The internet now mostly uses SPF (shortest path first) algorithms. This means traffic will take the path with the fewest hops, which naturally isn't always the best one. Geographical distance should also be taken into account, as well as the cost of a hop. The same goes for packets that need a low trip time: they need to be treated differently than packets that simply have to arrive, e.g. with low packet loss. Research into a way to manage these different kinds of packets should be carried out; the sketch below illustrates the difference a link metric makes.
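To make the hop-count-versus-cost point concrete, here is a minimal sketch (the topology and the per-link latencies are invented for illustration): the same shortest-path routine is run once with every link counted as one hop and once with latency as the link cost, and it picks different paths.

    import heapq

    def dijkstra(graph, src, dst):
        # graph: {node: {neighbour: cost}}; returns (total_cost, path).
        queue = [(0, src, [src])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbour, weight in graph[node].items():
                if neighbour not in seen:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return float("inf"), []

    # Invented topology; link costs are latencies in milliseconds.
    latency_ms = {
        "A": {"B": 50, "C": 2},
        "B": {"A": 50, "D": 50},
        "C": {"A": 2, "E": 2},
        "D": {"B": 50, "E": 2},
        "E": {"C": 2, "D": 2},
    }
    # Hop-count view of the same topology: every link costs 1.
    hops = {n: {m: 1 for m in nbrs} for n, nbrs in latency_ms.items()}

    print(dijkstra(hops, "A", "D"))        # (2, ['A', 'B', 'D'])      fewest hops, 100 ms
    print(dijkstra(latency_ms, "A", "D"))  # (6, ['A', 'C', 'E', 'D']) one hop more, 6 ms

The fewest-hops route is more than an order of magnitude slower here, which is exactly why a single hop-based metric is not enough for latency-sensitive traffic.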
3.3.4 Mobile and Ad-Hoc Routing
This paragraph discusses the lack of a real standard for mobile devices such as laptops, cars, airplanes, etc. These devices need to be treated quite differently, because they will not be connected to the same network all the time; they might not even be connected at all from time to time. No real standard has been made for this. Link layer standards such as WiFi and GPRS are in place, but higher layer protocols aren't well-suited for these devices yet.

Paragraph 3.4 Security

3.4.1 Formal Methods
The established standards on the internet are not reviewed and maintained enough. As a result, security leaks and abnormalities are not discovered fast enough, which is a real security issue. Automated tools for applying formal methods to cryptographic algorithms and/or protocols could be quite helpful to discover abnormalities before they pose a serious risk.

3.4.2 Key Management
For non-hierarchical key management architectures, e.g. flat systems, that need a secure connection, there is no easy-to-use key management system available. The widely used IKE (Internet Key Exchange) system is much too complicated, so there is a need for a new, easy-to-use key management system. In my opinion this is really important: when authentication software is too complicated, it will not be used, or it will be used the wrong way, so the need for a good way to handle this is great.

3.4.3 Cryptography
Most governments and commercial companies give little or no information about their research into stronger cryptography. This is understandable, but it leaves the open source community with inadequate knowledge about it. Open research into this is needed.

3.4.4 Security for Distributed Computing
This paragraph briefly describes the same problem as the previous one, but narrows it down to mobile devices. Flat, non-hierarchical authentication between mobile devices is also in need of research.

3.4.5 Deployment Considerations in Security
The S-BGP protocol is a theoretical protocol that needs further study before it can be deployed in the real world. Unfortunately this study hasn't been carried out yet. It may prove a great asset in realizing a general, easy-to-install, easy-to-manage security system. This is what this whole chapter comes down to: developing a general, unhackable, reliable security system that the internet so desperately needs.

3.4.6 Denial of Service Protection
Nowadays a large number of computers are infected with malicious software that can mount an attack on internet servers whenever the author of the software wants. This threat has grown much greater in recent years, because worms and viruses have spread exponentially. Research is needed to make sure new services and software are resilient to these DoS and DDoS attacks.

3.5 Network Management

3.5.1 Managing Networks, Not Devices
Nowadays the SNMP protocol is used to view the status of a device and to alter its parameters, but this software only looks at one device instead of at the big picture. Software has to be developed so that you can manage the whole network instead of one device at a time. This should also work in a decentralized way, so you can change the network from anywhere. Such software would be a real asset to administrators and would help them respond quickly to new requirements and demands. A system like this should be developed as fast as possible, as it may reduce management costs dramatically. The sketch below shows how device-centric today's SNMP-based management is.
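As a small illustration of the one-device-at-a-time status quo, here is a minimal sketch, assuming the pysnmp library (its version 4 style high-level API) and SNMPv2c agents readable with the "public" community string; the hostnames are invented placeholders. Each device is polled separately for its sysDescr, and nothing in the code has any notion of the network as a whole, which is exactly the limitation the RFC wants research to move beyond.

    # A minimal sketch, assuming the pysnmp library and SNMPv2c agents
    # that answer to the "public" read community. Hostnames are invented.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    DEVICES = ["router1.example.net", "switch1.example.net"]  # hypothetical

    for host in DEVICES:
        errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
            SnmpEngine(),
            CommunityData("public"),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))
        if errorIndication:
            print(host, "unreachable:", errorIndication)
        else:
            # One device, one answer: no network-wide view exists here.
            for name, value in varBinds:
                print(host, "=", value)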
3.5.2 Enhanced Monitoring Capabilities
Paragraph 3.5.2 discusses the monitoring capabilities that are in use today. This ground is already covered by the previous paragraph, which makes this one redundant in my opinion.

3.5.3 Customer Network Management
Here, management of the network by customers is discussed. A tool should be made so that users can troubleshoot their internet connection themselves. This might be nice, but in my opinion users will not use it. Most users will call the administrator even when the printer isn't functioning, not knowing someone simply turned it off.

3.5.4 Autonomous Network Management
Autonomous network management can be a great asset as well: if a network can manage itself better and become intelligent, reliability may increase dramatically. In my opinion research in this sector can be very rewarding, but the development may prove difficult.

3.6 Quality of Service

3.6.1 Inter-Domain QoS Architecture
QoS is a way to check whether packets were handled within the set thresholds, which makes it visible how the network performs. This could be a great asset if not for two fundamental problems: the QoS protocol is vulnerable to DoS attacks, and it has a huge impact on performance. For these two reasons more study is needed to create a new protocol that is less vulnerable and puts less of a load on routers.

3.6.2 New Queuing Disciplines
Active queue management may decrease latency at routers and may be worth investigating. The RFC doesn't provide much information on this.

3.7 Congestion Control
This chapter describes the congestion control mechanisms that are in place today. They have been operational since the internet came to be and are one of the cornerstones of its reliability. Of course, a lot has happened since the day they were implemented: streaming media was introduced and other high-bandwidth applications came to be. The demand for new congestion control protocols is growing. New research is necessary to find out whether letting some layers of the OSI model interact more can be an asset. This research may yield better performance, less latency and better reliability.

3.8 Studying the Evolution of the Internet Infrastructure
A fundamental study of accomplishments and demands may prove to be of great importance. A study into why it is so hard to implement new technologies like IPv6, QoS and the like could be very interesting. This chapter also describes the chicken-and-egg problem: a new technology is not implemented because there is no commercial advantage without enough customers, while on the other hand there aren't enough customers because the technology isn't implemented yet. It is also wise to investigate whether implementing new technologies impacts the internet's core strengths, like the current degree of global addressability of hosts, end-to-end transparency of packet forwarding, and good performance for best-effort traffic. This might prove to be quite a valuable study as well.

3.9 Middleboxes
Middleboxes are nowadays mostly used for commercial purposes, but they can be of much more value. This chapter briefly raises the option of investigating whether they can be of value as long-term solutions. In my opinion, middleboxes have not nearly reached their potential. At the colocation facility I worked at in Curacao, we looked into using a middlebox called "Riverhead", a highly advanced box that filtered out malicious traffic and would automatically reduce bandwidth to IP ranges that were being attacked, among other things. Boxes like these might be the future of defending against internet attacks, and if placed at strategic points in the global internet they may help reduce fraudulent traffic; a sketch of the basic throttling idea follows below.
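To make the "reduce bandwidth to attacking IP ranges" idea concrete, here is a minimal sketch of one common building block, a per-source token bucket. The rates and the traffic sample are invented for illustration; a real middlebox such as the Riverhead box works on live traffic, typically in hardware and aggregated per IP range rather than per address.

    import time

    class TokenBucket:
        """Per-source rate limiter: a source may send `rate` packets per
        second, with short bursts of up to `burst` packets."""
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True   # forward the packet
            return False      # drop: this source exceeds its budget

    buckets = {}

    def filter_packet(src_ip):
        # One bucket per source; invented rates: 10 packets/s, burst of 20.
        bucket = buckets.setdefault(src_ip, TokenBucket(rate=10, burst=20))
        return bucket.allow()

    # Invented traffic sample: a flood from one source, a trickle from another.
    for i in range(100):
        filter_packet("198.51.100.7")    # flood: most of these get dropped
    print(filter_packet("203.0.113.5"))  # well-behaved source: True

The flooding source burns through its burst allowance and is then throttled to its steady rate, while other sources are untouched, which is the essence of the automatic bandwidth reduction described above.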
3.10 Internet Measurement
A view of the status of the current internet might be valuable as well. It could be quite useful to know, for instance, how many nodes are connected, what the performance is, what type of traffic is being processed, the total bandwidth of the nodes, etc. In my opinion this sounds nice, but it is impossible to get these figures. It could take years to collect all the data from all the ISPs and the like, and by that time the data would long be outdated.

3.11 Applications
This chapter discusses specific application problems like spam. It can be useful to map the total number of spam emails, where they come from and what the impact of all these unwanted messages is. Studies like these have been carried out by many companies, but no real solution to the spam problem has been found. The senders of spam become smarter by the day and will add random characters and the like to stay ahead of the "spam cops". It may prove quite hard to beat this growing problem.

3.12 Meeting the Needs of the Future
Of course, research into what kind of infrastructure and hardware the internet will need in the future is of the utmost importance. Not many words are spent on this in the RFC, as it speaks for itself.

3.13 Freely Distributable Prototypes
This last chapter describes the advantages of open prototyping. If somebody develops a new standard, it will be implemented much faster when a prototype is provided with it. Standards like IPv4 were implemented quite fast, partly because a prototype was provided. I agree with this, but I think another advantage is that when you deliver a prototype, the different implementations are more compatible. If you look at a standard with excellent specifications, you will find that different companies or organizations interpret them differently, making the implementations not fully compatible.

Conclusion
I found this RFC to be quite interesting and well formulated. It gives a clear view of all the problem areas. It is a well-written memo, giving insight into current developments. I have learned a great deal from it and found this exercise to be time consuming but rewarding. The themes I found especially interesting are the security themes. It is of the utmost importance to get security right: the internet performs satisfactorily for most applications in its present form, but its security is seriously lacking. Defenses against DDoS and DoS attacks are also very important. I think the RFC is complete; I have nothing to add to it.