Thursday, October 21, 2010

Supercomputing steps down from the altar



Ten trillion, one hundred trillion, one quadrillion operations per second ...

Each time China's high-performance computers set a new record, the figures were, in most people's eyes, merely a symbol of national scientific and technological strength. They stood high above ordinary business users, and the phrase "high-performance computing platform" evoked only special state departments and large industrial projects, such as major oil fields or large aircraft manufacturing.

This situation began to change with the Shanghai Supercomputer Center.

In December 2000, the Shanghai municipal government invested in building the Shanghai Supercomputer Center. Because the local government took responsibility for its operation and positioned it with foresight, the center was defined as a "computing platform for the public," and more and more users came into close contact with high-performance computing.

The development of public computing platforms has become an important symbol of a city's modernization. They thread through the industry chains of many sectors, letting talent, capital, and technology flow rapidly and bringing huge economic and social benefits. Over the past ten years, increasingly strong demand, together with new opportunities from emerging industries such as cloud computing, the Internet of Things, and triple-network convergence, quickly ignited a passion for large-scale public computing platforms. "Computing platforms" that once served quietly in the background have sprung to the front of the stage.

Currently, Beijing, Shenzhen, Tianjin, Shenyang, Wuhan, Guangzhou, Jinan, Chengdu, Changsha and other cities are investing heavily in building supercomputer centers. Large centers ranging from 100 teraflops to a petaflop, and even beyond, are blossoming everywhere across the country, and some cities have more than one.

High-performance computing applications stepping down from the altar is a sign of progress in the IT industry, and developing public computing platforms is an inevitable choice for sustainable regional economic development. But facing this surging wave of supercomputing, we must ask:

Do we really need so many supercomputing centers?

Can these expensive computing platforms really earn their keep?

Who is operating these super machines, and who is using the supercomputing centers?

To find out, our reporters went to Shanghai, Chengdu, and Lanzhou, trying to uncover the mystery of the supercomputing centers.

Last June, the Ministry of Science and Technology formally approved the establishment of a National Supercomputing Center in Shenzhen, with 200 million yuan of state investment; a petaflop supercomputer is to be completed there by the end of 2010. Almost at the same time, the Ministry of Science and Technology, the Tianjin Binhai New Area, and the National University of Defense Technology signed a cooperation agreement to jointly invest 600 million yuan to build a national supercomputing center in the Binhai New Area of Tianjin and develop a petaflop supercomputer.

Recently, there have been indications that Beijing will also build a petaflop-scale supercomputing center, with preparations under way. In addition, Guangzhou, Shenyang, Chengdu, Changsha, Wuhan and other cities are almost all building or expanding supercomputing centers, with targets at the petaflop level or even higher.

As if overnight, national supercomputing centers are blossoming everywhere. Do we really have such large computing needs? With this question, Computer World reporters visited representative supercomputing centers in Shanghai, Chengdu, Gansu and elsewhere. Some are already operating successfully, some are still under construction, and all face the same problem: how to step down from the altar and into ordinary life.

Subsidizing users to win them over

In the 1990s, people already had some knowledge of high-performance computing. The Shanghai Meteorological Bureau found that its existing computing power could no longer meet its everyday needs, so it prepared to purchase a new "big machine." When the plan was submitted for government procurement, the Shanghai municipal government judged that buying such a machine was too costly: the Meteorological Bureau would not use it frequently, and its idle time would be a great waste of resources. If the computer could instead be made available to more users as a public facility, it could be used far more effectively. The municipal government therefore proposed building a public service platform around the "big machine," and thus the concept of the supercomputer center was born. "We began construction in 1999 in the Zhangjiang Hi-tech Park in Pudong, and in early 2001 the Shanghai Supercomputer Center officially began serving the community," said Xi Zili, director of the Shanghai Supercomputer Center.

Xi Zili was among the first people in China to work in supercomputing services, and he has been in the industry for over ten years. Speaking of the difficulty of recruiting users in the center's early days, Xi Zili sighed: "The government had put a large amount of money into this supercomputing center. If usage did not come up, it meant failure, and it meant a great waste of capital, because the hardware investment had already been made." The hardest part in the early period was having to find customers themselves. "At first, I visited three to five users every week to understand their backgrounds, operations, and needs, and to attract them to the center," Xi Zili said. Sometimes the center even subsidized users to use the machine.

At the beginning, the Shanghai Supercomputer Center chose a commercial aircraft company as its first industrial user, but at that time the company was in financial difficulty. In the mid-1990s it had bought an IBM 4381 mainframe, but because its funds were limited the project was never finished, and the machines sat idle. The company wanted to move its work into the center, but it had neither money to pay machine-time fees nor many programs it could run. On learning this, Xi Zili told them the center was willing to pay 200,000 yuan in subsidies to cover part of the cost of moving in, and the deal was done.

Besides the shortage of users, the capability of the machine itself also constrained the Shanghai Supercomputer Center's early development. The first machine serving the center was a Shenwei-series supercomputer, whose compatibility limited the range of potential applications and users. Not until 2004, when the center brought in the open-architecture Dawning 4000A, did things change: the Dawning series' architecture, software, and operating system are open and standardized, which means the system is better compatible with common international software. With the compatibility problem solved, the center's user base grew at an unprecedented pace from 2004 onward. Last year, the center introduced a Dawning 5000A supercomputer with a computing scale of 230 teraflops. Today, the Shanghai Supercomputer Center's users span all kinds of sectors.



Large-scale hardware computing platforms require matching software.

At present, the Shanghai Supercomputer Center is the most successfully operated public computing platform in China. Unfortunately, of the dozens of supercomputing centers the state has invested in, few are still running well. Apart from a handful of survivors such as the Shanghai Supercomputer Center, some are on the verge of collapse.

Who are the "super users"?

"Only together can really play a role, reflecting the value of public computing platform, which is also government investment to establish ultra-ICC mind." Xi Zili Moreover, the "Computer World," told reporters.

Today, many supercomputing centers under construction have understood this truth. Although the petaflop-scale supercomputing center in Tianjin has not yet officially gone into service, its leadership is already out courting users.

The Tianjin center is targeting fields with strong supercomputing demand: meteorology, oil, medicine, architecture and so on. Liu Guangming, director of the National Supercomputing Center in Tianjin, has therefore visited the Tianjin Meteorological Bureau, the building-software group of the China Academy of Building Research, the Tianjin International Joint Academy of Biomedicine, CNOOC, and the Geophysical Research Institute of Shengli Oilfield. He found that these "super users" are very different from those of the past. They used to have no money and no people; now they are short of neither talent nor funds, and some have even built their own computing centers. Where, then, will their strong demand for a public supercomputing platform come from?

"Demand is still very large." As Xi Zili said, the small-scale processing operation can be completed in their own computing centers, large-scale computing and use to large commercial software project, it is necessary to large-scale public computing platform to run , because only Supercomputing Center have enormous computing power and software capabilities.

Our reporters found in their survey that more than 80% of supercomputing-center users are research institutes and universities, while the remaining 20% or so use the centers for industrial production.

BGP's Nanjing branch is a typical industrial user of the Shanghai Supercomputer Center. In fact, Nanjing BGP has its own computing center, but it often has larger-scale computing needs. One year, Nanjing BGP took part in an international bid and had to deliver its results within one week; its own center was not big enough to complete a job of that size, so it turned to the Shanghai Supercomputer Center. Xi Zili personally lobbied other users to free up 1,000 CPUs for the Nanjing BGP project, so that the company got its results in time to bid. "Without such a large public computing platform, enterprises would miss many large international projects like this," Xi Zili said.

The business demand for supercomputing, then, is very large. The Gansu Supercomputer Center is much smaller in scale than Shanghai's, but the value it delivers to its customers is just as clear.

According to its director, Hu Tiejun, the Gansu Supercomputer Center adopted a strategy of building and operating at the same time. "Although our center is not large, it focuses on forward-looking technology. In the 2004 expansion we planned to make breakthroughs in high-performance computing, to build an auxiliary application platform for the next-generation IPv6 Internet and a data exchange center, and we completed the construction within the year."

Today, the Gansu Supercomputer Center has a 41-teraflop cluster, 21 commercial software packages, and 13 open-source packages. Its users include universities such as Lanzhou University and Lanzhou Jiaotong University, government departments and research institutes such as the Gansu Provincial Meteorological Bureau and the Cold and Arid Regions research institute of the Chinese Academy of Sciences, and a wide range of enterprises. Among these users, the project that is most targeted and best embodies the center's current focus is its cooperation with Lanzhou University on large-scale virtual screening for drug research.

Drug development is time-consuming, investment-heavy research. The traditional process is costly, slow, and has a high elimination rate: on average, developing a new drug costs over one billion U.S. dollars and takes about 10 years, and about 90% of drug candidates are eliminated in the clinical stage. In the virtual-screening stage, millions or even billions of molecules must be simulated; traditional experimental methods involve an enormous workload and take a long time. "This is where the supercomputer shows its unparalleled advantage," Hu Tiejun said. Supercomputer simulation takes very little time: in just a few weeks it can eliminate large numbers of compounds that do not meet the requirements. The screening scope is much larger than traditional experiments, the results are more accurate, the efficiency of drug development improves greatly, and R&D funds are saved considerably.

Xi Zili of the Shanghai center and Hu Tiejun of the Gansu center both say their utilization has reached 70% to 80%, which in a sense means they are basically running at full capacity. But doubts remain in the industry: with so many cities building supercomputing centers at the petaflop or even ten-petaflop scale, can the machines really be put to use?

Software applications are the weak link

At present, the applications at supercomputing centers lag far behind the scale of their hardware. Many in the industry therefore argue that "if applications cannot keep up, there is no point building ever-bigger machines."

"If there is no advance of hardware resources, making sure that no more applications." Xi Zili did not agree to this view. He believes that it is difficult to define all the urban construction Supercomputing Center's original intention and ability to not rule out that some local governments to follow suit, the image projects out of psychological, "but the public do more advanced computing platforms. In the supercomputing field, needed to drive the hardware development of the industry. "

In Xi Zili's view, only when the hardware is developed first can applications keep pace. Only if a supercomputing platform reaches a scale of 100 teraflops is it possible to run 100-teraflop software. "So supercomputers are certainly necessary, but they should not run too far ahead, as that causes waste."

Dou Wenhua, a professor at the National University of Defense Technology, which is about to complete the petaflop supercomputer "Milky Way" (Tianhe), likewise believes that human demand for high-performance computers is far from satisfied; every step forward, from basic theory to practical application, requires innovation and breakthroughs in technology and materials.

First, public computing platforms have far-reaching significance for the development of high-tech industry. The commanding height of high-tech industry is product development capability, and in the design of new materials, in biotechnology, in new medicine, and in environmental protection and the comprehensive use of resources, high-performance computing can play a major role. Second, as one of the innovation vehicles of the modern service industry, public computing platforms will certainly fuel enterprises' independent innovation. "The convergence of the Internet, telecom networks, and broadcast networks is an inevitable trend, and as the networks converge and develop, they will effectively promote the formation and growth of new computing models and new service models," Dou Wenhua said.

However, among the many small and medium-sized regional public computing platforms built or under construction, and the petaflop centers planned in some places amid a government-led campaign, only the Shanghai Supercomputer Center is somewhat stronger; in many provinces and municipalities, the infrastructure, support services, and operating mechanisms of public computing platforms have never been systematically developed. In short, the application side is still quite weak.

"Sometimes, the domestic Super Computing Center is very tragic." Xi Zili told reporters. Although the Shanghai Super Computer Center of the computing scale is 200 trillion times, but usually reach two trillion times a day, 10 trillion times the size of very good use, and most of the time also, but 50 trillion times.

Why is this so? "Because our software for supercomputing is poor; it cannot scale applications up with the system," Xi Zili said.

In the advanced field of materials analysis, for example, the United States runs jobs tens or even a hundred times larger than China's: its supercomputing centers routinely run very large jobs on 50,000 to 100,000 CPUs at once, while in China jobs use only a few hundred, or at most about 1,000 CPUs simultaneously. "It is not that our machines are hundreds of times worse than theirs; it is that our high-performance computing software is very backward, and our parallel computing capability is very poor," said Wang Jianbo, director of the Chengdu Cloud Computing Center. However huge the hardware platform, without strong software support, "these machines are no different from a pile of junk," Wang said.

In this situation, Chinese supercomputing centers can only buy commercial software from developed countries. Some proprietary software is extremely expensive, costing even more than building the hardware platform, and the average supercomputing center simply cannot afford it. Some high-end products, restricted by foreign export policies, cannot be bought at any price.

According to people familiar with the matter, although the country has invested a great deal of money in software over the past 10 years, the effect has not been significant. Apart from meteorology, oil, and a few other pillar industries that have some software R&D capability, large-scale commercial software remains very scarce.

The reason is that a supercomputing center sits at the intersection of many disciplines and industries, so it has more than one "parent": software and hardware resources are not managed by the same ministries. As a result, software developers do not understand hardware architecture, and hardware developers do not know the characteristics of large-scale software. "This is the most fatal problem," Xi Zili said.

However, in the "nuclear high base" and after the introduction of major national science and technology projects to the core of electronic devices, high-end general chips and basic software become bigger and stronger, the state appointed the appropriate body to coordinate the work of ministries. "This is the development of China's public computing platform is a good thing, we hope the integration of industry and more in-depth information." Jian-Bo Wang said.

Commentary

Supercomputing centers need government guidance and help

A supercomputing center is a comprehensive, interdisciplinary platform whose development can lead servers, software, chips, machinery manufacturing, and other related industries forward together. At the same time, supercomputing centers show their progress through final results in every walk of life: the first time a supercomputer found oil, the first weather forecast on a supercomputer, the first gene analysis on a supercomputer ... supercomputing touches every industry and has become a core of national competitiveness in science and technology.

Although we are pleased to see large supercomputing centers brewing everywhere, the government should, to some extent, guide the whole process and plan their scale and geography. A supercomputing center is not like small and medium construction projects that can blossom everywhere; it is a huge undertaking that costs money, time, and labor. Reportedly, the Shanghai Supercomputer Center's electricity bill reaches 12 million yuan a year. The government should therefore assist supercomputing center operations with electricity, manpower, and policy.

Besides the backwardness of high-end applications, our public computing platforms still have many problems.

First, geographical distribution is uneven. This uneven distribution of resources creates a dilemma: users with demand find it hard to obtain resources, while valuable resources sit idle and wasted.

Second, construction lacks unified planning and positioning. Supercomputing centers under different departments invest repeatedly in economically developed regions, while the service positioning of public computing platforms in many places is ambiguous and lacks specific subject-area strengths.

Third, they fail to serve cross-disciplinary research.

Fourth, the high-performance computing industry chain is incomplete. Public computing platforms serve end users directly and have concrete knowledge of user needs, application characteristics, and technology trends. They are also the main customers of high-performance computing hardware and software vendors. As a key link in the industry chain, public computing platforms must keep the whole ecosystem developing together.

However, these problems cannot be solved by the supercomputing centers themselves; the power lies in the hands of their masters. For example, the state could plan and control where supercomputing centers are built, so that in the future they form a network with wider coverage that can radiate to users across the whole country. In addition, although supercomputing centers have an inherent advantage in hands-on operation, they are not qualified as "educators"; policies should be introduced to let supercomputing centers train large numbers of industry professionals.

In short, public computing platforms reflect our country's computing power and are the core of the high-performance computing industry chain; the relevant agencies should attach great importance to their progress. (Text / Liu Lili)

Link

The worldwide trend toward civilian high-performance computing

Currently, high-performance computers face challenges in scalability, reliability, power consumption, balance, programmability, and management complexity, and the industry is promoting technologies such as multicore and virtualization in response. A worldwide movement to bring high-performance computing to ordinary users has begun, ushering in what we call the era of "pervasive high-performance computing."

Globally, developed countries have already deployed a large number of public computing platforms, with the United States holding the most. The "Top 500" list of high-performance computers announced in November 2008 showed that 58.2% of the machines were installed in the United States, which controlled 66% of the total computing power; the United Kingdom followed, with 9% of the machines and 5.4% of the total computing power.

In the United States, government agencies are the major supporters of public computing platforms. The best known include the San Diego Supercomputer Center (SDSC), the National Center for Supercomputing Applications (NCSA), the Pittsburgh Supercomputing Center (PSC), Lawrence Livermore National Laboratory (LLNL), Argonne National Laboratory (ANL), and Oak Ridge National Laboratory (ORNL).

In the EU, every framework programme for research and technological development has invested heavily in high-performance computing. Britain is Europe's largest supercomputing user, mainly through the Edinburgh Parallel Computing Centre (EPCC) and the University of Manchester's academic computing service (CSAR). Germany's installed number of supercomputers is roughly on a par with the United Kingdom's; its three national supercomputing centers are the High Performance Computing Center Stuttgart (HLRS), the John von Neumann Institute for Computing (NIC), and the Leibniz Supercomputing Centre (LRZ) in Munich. France follows Germany and the UK, with its largest supercomputers run by the French Atomic Energy Commission. Other European countries have fewer supercomputing centers. Overall, Europe's supercomputing centers have many distinctive features in facilities, operating models, customer support, and applications.

In Japan, the larger supercomputing centers include the Earth Simulator Center, RIKEN (the Institute of Physical and Chemical Research), the National Institute of Advanced Industrial Science and Technology, and the center established by the Japan Aerospace Exploration Agency (JAXA).

The developed countries' investment in high-performance computing research and industry is huge, sustained, and long in time span. This has given them a good foundation in research and technology, a wealth of accumulated experience, and a pool of professionals. At the same time, high-performance computing's contribution to national economic construction keeps rising, and the development of public computing has entered a virtuous circle.








Monday, October 18, 2010

An introduction to DDoS tracking





Link testing

Most tracking techniques start from the router closest to the victim and then examine the upstream links until the origin of the attack traffic is found. Ideally, this process is applied recursively until the attack source is reached. The technique assumes the attack remains active until the trace completes, so it is difficult to use after an attack ends, against intermittent attacks, or against attacks that adjust themselves during tracking. Link testing includes the following two methods:

1. Input debugging

Many routers offer an input-debugging feature that lets administrators filter certain packets at an egress port and determine which ingress port they arrived on. This feature can be used for traceback. First, the victim, on recognizing it is under attack, determines a signature that describes the attack packets. Using this signature, the administrator configures input debugging on the egress port of the upstream router; the filter reveals the corresponding input port, and the process is repeated one level further upstream until the original source is reached. Of course, much of this work is manual; some foreign ISPs have jointly developed tools that can follow the trail automatically within their own networks.
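As a toy illustration of that recursive loop (real input debugging is configured manually on each ISP's routers; the topology, router names, and ports below are invented), each hop's filter can be modeled as reporting the input port on which the attack signature arrives:

```python
# upstream[router][input_port] -> next router toward the attack source
# (None means the port faces the origin). All names are hypothetical.
upstream = {
    "victim_edge": {1: "isp_core"},
    "isp_core":    {2: "peering_rtr"},
    "peering_rtr": {3: None},
}

# Stand-in for the output of each router's input-debugging filter:
# the port on which packets matching the attack signature arrive.
input_port_of = {"victim_edge": 1, "isp_core": 2, "peering_rtr": 3}

def traceback(start):
    """Walk hop by hop toward the source while the attack is active."""
    path, router = [start], start
    while router is not None:
        port = input_port_of[router]      # ask the router's filter
        router = upstream[router][port]   # move one hop upstream
        if router:
            path.append(router)
    return path

print(traceback("victim_edge"))  # ['victim_edge', 'isp_core', 'peering_rtr']
```

The loop stops when the filter points at a port with no known upstream router, i.e. the boundary of the cooperating network.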

But the biggest problem with this approach is management overhead. Contacting multiple ISPs and coordinating with them takes time. The approach therefore demands a great deal of time and labor, and is almost infeasible in practice.

2. Controlled flooding

Burch and Cheswick proposed this method. It actually manufactures a flood attack and determines the attack path by observing changes in router state. First, the victim needs a map of the upstream routers. When under attack, the victim floods the links of its upstream routers one at a time, following the map. Because those routers' buffers are shared with the attack packets, flooding a link that carries attack traffic increases the probability that attack packets are dropped. By continuing this test upstream along the map, the victim can close in on the source of the attack.
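A minimal simulation of that inference, under a toy model in which flooding the link carrying attack traffic sharply raises its drop rate (the link names and probabilities are invented):

```python
import random
random.seed(1)

ATTACK_LINK = "link_B"            # ground truth, unknown to the victim
links = ["link_A", "link_B", "link_C"]

def attack_arrival_rate(flooded_link, trials=10000):
    """Fraction of attack packets still reaching the victim while we
    flood `flooded_link`. Shared router buffers mean flooding the link
    on the attack path causes much higher loss of attack packets."""
    drop_p = 0.6 if flooded_link == ATTACK_LINK else 0.05
    survived = sum(1 for _ in range(trials) if random.random() > drop_p)
    return survived / trials

# Flood each candidate link in turn; the one whose flooding most
# depresses the attack arrival rate is on the attack path.
suspect = min(links, key=attack_arrival_rate)
print(suspect)
```

Repeating this one hop further upstream narrows the search toward the source, which is exactly the recursion the text describes.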

The idea is creative and practical, but it has several drawbacks and limitations. The biggest is that the method is itself a denial-of-service attack: it also degrades legitimate, trusted paths, a shortcoming that is hard to engineer away. Moreover, controlled flooding requires a map covering almost the entire network topology. Burch and Cheswick also pointed out that the approach is hard to apply to tracing distributed DDoS attacks, and it is only effective while an attack is in progress.

Cisco's CEF (Cisco Express Forwarding) is in fact a kind of link testing: CEF can be used to trace back to the final source, but every router on the path must be a Cisco router with CEF support, which at the time meant the Cisco 12000 or 7500 series (I have not checked the latest Cisco documentation). Using this feature is also very resource-intensive.

On Cisco routers that support ip source-track, IP source tracking is carried out in the following steps:

1. When a destination is found to be under attack, enable tracking for that destination address on the router with the ip source-track command.

2. Each line card creates a specific CEF queue for the tracked destination address. On line cards or port adapters that use specialized ASICs for packet switching, the CEF queue is used to pass packets to the line card's or port adapter's CPU.

3. Each line card CPU collects traffic information for the tracked destination.

4. The collected data is exported periodically to the router. To display a summary of the flow information, enter the command show ip source-track summary. To display more detailed information for each input interface, enter the command show ip source-track.

5. The statistics provide a breakdown of the tracked traffic by IP address, which can be used to determine the upstream router. Tracking can then be turned off on the current router with the no ip source-track command and re-enabled on the upstream router.

6. Repeat steps 1 through 5 until the attack source is reached.
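Assuming an IOS-style CLI, one iteration of that loop might look like the following session sketch. Only the ip source-track command names above are taken from the text; the prompts, argument placement, and exact syntax are unverified assumptions and vary by platform and release:

```
! Step 1: on the router nearest the victim, start tracking the attacked address
Router(config)# ip source-track

! Step 4: inspect the collected per-interface statistics
Router# show ip source-track summary
Router# show ip source-track

! Step 5: once the upstream router is identified, stop tracking here ...
Router(config)# no ip source-track

! ... then repeat the same commands one hop upstream (step 6).
```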

This should more or less answer the question securitytest raised.

Logging

In this method, routers record the key data of the packets they forward, and data-mining techniques are then used to determine the path the packets traversed. While this approach can track an attack after the fact, it has obvious shortcomings: it may require a large amount of resources (or sampling), and there is the problem of merging and coping with large volumes of data.

ICMP traceback

This approach mainly relies on routers generating their own ICMP traceback messages. Each router, with a very low probability (for example, 1/200000), copies the contents of a packet into a special ICMP message that also contains information about the adjacent routers along the path to the source. When a flood attack begins, the victim can use these ICMP messages to reconstruct the attacker's path. Compared with the approaches described above, this has many advantages, but also some disadvantages. For example, ICMP traffic may be filtered out as ordinary ICMP; and ICMP traceback messages depend on an input-debugging-like capability (associating a packet with its input port and/or source MAC address) that some routers do not have. The approach must also somehow handle attackers who send forged ICMP traceback messages. In other words, it is best used in conjunction with other tracking mechanisms to be more effective. (IETF iTrace)
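The victim-side reconstruction can be sketched as follows. This is a hedged simulation, not the iTrace message format: router names are invented, the emission probability is raised far above the proposal's 1/200000 so the demo needs few packets, and each message here carries only a hop count and router name:

```python
import random
random.seed(7)

PATH = ["attacker_edge", "transit_1", "transit_2", "victim_edge"]
P_EMIT = 1 / 200   # demo probability; the proposal uses ~1/200000

def flood(n_packets):
    """Forward n attack packets along PATH; each router independently
    emits a (hop_count, router) traceback message with prob P_EMIT."""
    messages = []
    for _ in range(n_packets):
        for hop, router in enumerate(PATH):
            if random.random() < P_EMIT:
                messages.append((hop, router))
    return messages

# The victim dedupes the received messages and orders them by hop
# count to rebuild the attack path.
reconstructed = [router for _, router in sorted(set(flood(5000)))]
print(reconstructed)
```

With enough attack packets, every router on the path emits at least one message with overwhelming probability, which is why the scheme only works against sustained floods.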

This is what the IETF iTrace working group is studying. At the time I sent some comments to Bellovin but never got an answer. For example:

1. Although traceback packets are only sent at random with probability 1/20000, in the case of forged traceback packets this will still have some effect on router efficiency.

2. Traceback packets cannot solve the authentication problem of forgery. To determine whether a packet is forged, it must be authenticated, which adds workload.

3. Even with NULL authentication the scheme can still serve its purpose (in the authenticated case), and would not be much affected.

4. The original purpose of iTrace is to deal with source-address spoofing in DoS attacks, but the current design seems to care more about the path than about the source. Is the path really more useful than the source for solving our DoS problem?

So there is a pile of issues that I think iTrace will find hard to face.

Packet Marking

This technology is still a concept (it has not been put into practice): it changes the existing protocols, but only very slightly, unlike the iTrace idea, and I think it is better than iTrace. There has been much detailed research on this kind of tracking, producing a variety of marking algorithms, but the best is the compressed edge fragment sampling algorithm.

The principle of this technique is to reuse a field in the IP header, namely the Identification field: when the Identification field is not being used (for fragmentation), it is redefined to carry the mark.

The 16 bits of the Identification field are divided into: a 3-bit offset (allowing 8 fragments), a 5-bit distance, and an 8-bit edge fragment. The 5-bit distance allows paths of up to 31 routers, which is already enough for the current network.
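The bit layout can be sketched with a few shift-and-mask operations (Python; my own illustration of the layout described above):

```python
# Pack/unpack the 16-bit IP Identification field as reused by
# compressed edge fragment sampling:
#   3-bit fragment offset | 5-bit distance | 8-bit edge fragment
def pack_mark(offset, distance, frag):
    assert 0 <= offset < 8 and 0 <= distance < 32 and 0 <= frag < 256
    return (offset << 13) | (distance << 8) | frag

def unpack_mark(ident):
    return (ident >> 13) & 0x7, (ident >> 8) & 0x1F, ident & 0xFF

mark = pack_mark(offset=5, distance=17, frag=0xAB)
assert unpack_mark(mark) == (5, 17, 0xAB)
```

The 5-bit distance caps recordable paths at 31 hops, which is where the "enough for the current network" claim above comes from.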

The marking and path reconstruction algorithms are:

Marking procedure at router R:
    let R' = BitInterleave(R, Hash(R))
    let k be the number of non-overlapping fragments in R'
    for each packet w
        let x be a random number from [0..1)
        if x < p then
            let o be a random integer from [0..k-1]
            let f be the fragment of R' at offset o
            write f into w.frag
            write 0 into w.distance
            write o into w.offset
        else
            if w.distance = 0 then
                let f be the fragment of R' at offset w.offset
                write f XOR w.frag into w.frag
            increment w.distance
Path reconstruction procedure at victim v:
    let FragTbl be a table of tuples (frag, offset, distance)
    let G be a tree with root v
    let edges in G be tuples (start, end, distance)
    let maxd := 0
    let last := v
    for each packet w from attacker
        FragTbl.Insert(w.frag, w.offset, w.distance)
        if w.distance > maxd then maxd := w.distance
    for d := 0 to maxd
        for all ordered combinations of fragments at distance d
            construct edge z
            if d != 0 then z := z XOR last
            if Hash(EvenBits(z)) = OddBits(z) then
                insert edge (z, EvenBits(z), d) into G
                last := EvenBits(z)
    remove any edge (x, y, d) with d != distance from x to v in G
    extract path (Ri..Rj) by enumerating acyclic paths in G
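The core marking-and-rebuilding logic can be simulated with a simplified Python sketch. This is my own illustration, not the full scheme: router addresses are small integers, edges are marked whole rather than split into 8-bit fragments, the hash check is omitted, and the marking probability p = 0.04 is just an illustrative value.

```python
import random

MARK_PROB = 0.04  # marking probability p (illustrative value)

def mark_path(path, trials):
    """Simulate edge-sampling marks for packets traversing `path`
    (router IDs listed attacker-side first). Each surviving mark is an
    (edge, distance) pair, where edge = start XOR end router ID."""
    marks = []
    for _ in range(trials):
        edge, dist = 0, None
        for router in path:
            if random.random() < MARK_PROB:
                edge, dist = router, 0      # start a new edge sample
            elif dist is not None:
                if dist == 0:
                    edge ^= router          # close the edge: start XOR end
                dist += 1                   # every later router increments
        if dist is not None:
            marks.append((edge, dist))
    return marks

def rebuild(marks):
    """Reconstruct the router path from the victim's marks."""
    edge_at = {}
    for edge, dist in marks:
        edge_at[dist] = edge
    path = [edge_at[0]]                     # distance 0 mark = last-hop router
    for d in range(1, max(edge_at) + 1):
        path.append(edge_at[d] ^ path[-1])  # peel one hop per distance
    return list(reversed(path))

random.seed(1)  # deterministic demo
marks = mark_path([11, 22, 33, 44], 5000)
assert rebuild(marks) == [11, 22, 33, 44]
```

The XOR trick is what makes the scheme compact: a mark at distance d is the XOR of two adjacent router IDs, and XORing it with the already-recovered downstream router reveals the upstream one.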


Under laboratory conditions, a victim using this marking scheme needs to capture only 1,000 to 2,500 packets to reconstruct the entire path. That result is good, but the scheme has not been put into practice, mainly because it needs support from router manufacturers and ISPs.

Those are the main IP traceback techniques, both the nearly practical ones and the purely laboratory ones; there are others, but these are the main lines.

I had not worked on DDoS defense for a long time. Domestic products of this kind include Black Hole; abroad I previously knew of some, such as FloodGuard, TopLayer, and Radware. Prompted by securitytest I also learned of Riverhead, and I immediately went to read their white paper.

My previous posts were mainly about IP traceback, and securitytest turned to defense. For the DDoS problem, IP traceback and mitigation are not the same thing. IP traceback is about tracking: because DDoS traffic is usually spoofed, it is hard to determine the real source of an attack, and if the real source could be found easily, that would help not only against DDoS but against other attacks too, for example with legal issues. Mitigation, by contrast, takes the victim's angle. The victim generally cannot investigate the whole network to identify the source; and even if the source can be found, stopping it requires communication or legal means directed at the attack source (which is not the attacker's own machine). That means a lot of coordination across ISPs and other non-technical issues, which are often hard to handle. But from the victim's point of view there has to be some solution, so we need mitigation.

This happens to fall within my previous research scope, so I will say a lot about it. For mitigation, the fundamental technique is to separate the attack packets from the legitimate packets in a large volume of traffic, discard the attack packets, and pass the legitimate packets. Since perfect separation is impossible, the practical goal is to identify as many attack packets as possible while affecting as few normal packets as possible. That requires analyzing the methods and principles of DDoS (or DoS), which take the following basic forms:

1. DoS caused by system vulnerabilities. The features are fixed, so detection and prevention are easy.

2. Protocol attacks (some are system-specific, some are inherent to the protocol), such as SYN flood and fragmentation attacks. The features are fairly clear, so detection and prevention are relatively easy: for example SYN cookies, SYN cache, or simply discarding fragments. Examples include the land attack, smurf, and teardrop.

3. Bandwidth flood: junk traffic saturates the bandwidth. The features are hard to recognize, and defense is not easy.

4. Basically legitimate flood. Even harder than form 3, such as a distributed Slashdot effect.

A real DDoS usually combines several of these forms. For example, a SYN flood may also be a bandwidth flood.

The main factor affecting defense is whether distinguishing features are available. Forms 1 and 2 are relatively easy to handle, and some floods barely affect normal use and can simply be dropped, such as an ICMP flood. However, if the attack tool disguises its packets well as legitimate traffic, they are hard to identify.

The general mitigation methods are:

1. Filtering. For traffic with obvious features, such as some worms, the router can handle it. Of course, filtering is the ultimate solution: once the attack packets are identified, they can simply be filtered out.

2. Random packet drop. Combined with a good random algorithm, legitimate packets can be less affected.

3. Specific defensive measures such as SYN cookies and SYN cache, which defend against and filter common attack patterns like ICMP flood and UDP flood. SYN cookies are all about avoiding spoofing: since there are at least the three steps of the TCP handshake, it becomes easier to judge whether the source is spoofed.

4. Passive neglect. This can be called confirming by deception: a normal client whose connection fails will retry, while attackers generally do not. So the first connection request can be temporarily discarded, and the second or third accepted.

5. Actively sending RST, against SYN flood, as a number of IDS products do. Of course, this is not really effective.

6. Statistical analysis and fingerprinting. This was originally my main subject, but the work eventually hit a dead end, because the main problem is the algorithm itself. Deriving a fingerprint from statistical analysis and then discarding packets that match the attack fingerprint is an anomaly detection technique. It sounds simple, but it is not easy to avoid affecting legitimate packets, or it degenerates into random packet drop. (In fact it was made too complex: a detailed analysis of both attack packets and legitimate packets is not actually needed. It is enough to filter out most attack packets; even if some attack packets get through, that is fine as long as they no longer cause a DoS.) This is also the main subject of many researchers, the goal being to identify the attack packets.
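As an example of the specific defenses in point 3, here is a minimal SYN-cookie sketch (my own illustration; real implementations such as Linux's use a different MAC construction and also encode the client's MSS):

```python
# Minimal SYN-cookie sketch: the server encodes connection state into
# the TCP initial sequence number instead of storing it, so spoofed
# SYNs consume no server memory.
import hashlib
import struct

SECRET = b"server-secret"  # hypothetical per-server secret

def syn_cookie(src_ip, src_port, dst_ip, dst_port, counter):
    msg = struct.pack("!4sH4sHI", src_ip, src_port, dst_ip, dst_port, counter)
    digest = hashlib.sha256(SECRET + msg).digest()
    mac24 = int.from_bytes(digest[:3], "big")
    # slowly incrementing counter in the top byte, 24-bit MAC below it
    return ((counter & 0xFF) << 24) | mac24

def check_cookie(ack_seq, src_ip, src_port, dst_ip, dst_port, counter):
    # The final ACK must echo cookie + 1; accept current or previous counter.
    cookie = ack_seq - 1
    return any(cookie == syn_cookie(src_ip, src_port, dst_ip, dst_port, c)
               for c in (counter, counter - 1))

c = syn_cookie(b"\x01\x02\x03\x04", 1234, b"\x05\x06\x07\x08", 80, 7)
assert check_cookie(c + 1, b"\x01\x02\x03\x04", 1234, b"\x05\x06\x07\x08", 80, 7)
assert not check_cookie(c + 1, b"\x09\x09\x09\x09", 1234, b"\x05\x06\x07\x08", 80, 7)
```

A spoofed source never sees the SYN-ACK and therefore cannot return a valid cookie, which is exactly the handshake-based spoofing check described above.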

Now back to the Riverhead product that securitytest mentioned. I have only just learned about Riverhead's technology from their white paper, but based on my analysis their methods do not go beyond the range described above.

Riverhead's core workflow is detection (Detection), diversion (Diversion), and mitigation (Mitigation): detect the attack, divert the traffic to their Guard product, and let the Guard perform mitigation.

Its implementation steps are:

Since there is no diagram, let me first define some terms:

# remote router: a router close to the distributed denial-of-service sources

# proximal router: a router close to the victim

# subsidiary router: the router on which Riverhead's Guard device is installed

Defense steps

1. First, detect that a DDoS is taking place and identify the victim.

2. The Guard sends a BGP notice to the remote router (a BGP announcement covering the victim's prefix, with higher priority than the original announcement), declaring a new route to the victim that points to the Guard's loopback interface. As a result, all traffic to the victim is diverted from the remote router to the Guard on the subsidiary router.

3. The Guard inspects the flow, removes the attack traffic, and forwards the cleaned traffic safely to the subsidiary router and back to the victim.
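The diversion in step 2 works because routers prefer the most specific matching prefix. A toy Python sketch of that selection logic (my own illustration; the prefixes and next-hop names are hypothetical):

```python
import ipaddress

# Simplified longest-prefix-match table: the Guard injects a more
# specific route for the victim than the normal aggregate, so traffic
# to the victim is pulled through the Guard for scrubbing.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "proximal-router",  # normal route
    ipaddress.ip_network("203.0.113.10/32"): "guard-loopback",  # injected by Guard
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

assert next_hop("203.0.113.10") == "guard-loopback"   # victim traffic diverted
assert next_hop("203.0.113.99") == "proximal-router"  # other traffic unaffected
```

Only the victim's address is diverted; the rest of the prefix keeps its normal path, which is what makes the scheme deployable during an attack.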

The core is the Guard. Its technology, described in the MVP (Multi-Verification Process) architecture white paper, consists of the following five levels:

Filtering: This module contains static and dynamic DDoS filters. Static filters block non-essential traffic and can be user-defined or use Riverhead's defaults. Dynamic filters, based on detailed behavior analysis and flow analysis and updated in real time, increase scrutiny of suspicious flows and block traffic confirmed as malicious.

Anti-Spoofing: This module verifies whether packets entering the system are spoofed. The Guard uses a unique, patented source-verification mechanism to prevent spoofing, and also a mechanism that confirms legitimate flows, so that legitimate packets are not discarded.

Anomaly Recognition: This module monitors all traffic that the filtering and anti-spoofing modules have not discarded, compares the flow records against a baseline of normal behavior, and finds anomalies. The idea is to use pattern matching to distinguish black-hat traffic from legitimate communication. The results are used to identify the attack source and type, and to propose rules for intercepting such traffic.

Anomaly detection covers: packet size and its distribution, flow rate, packet arrival intervals, port distribution, number of concurrent flows, characteristics of higher-level protocols, and ingress rate.
Traffic is categorized by: source IP, source port, destination port, protocol type, and connection volume (daily, weekly).

Protocol Analysis: This module handles suspicious application-level attacks found by the anomaly recognition module, such as HTTP attacks. Protocol analysis also detects a number of protocol misbehaviors.

Rate Limiting: mainly deals with sources whose traffic consumes too many resources.
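The baseline comparison described under anomaly recognition can be sketched as follows (a toy illustration of the general idea, not Riverhead's algorithm; the threshold k and sample values are hypothetical):

```python
from statistics import mean, stdev

# Toy baseline anomaly check: flag a traffic counter (e.g. SYNs/sec
# toward one destination port) that deviates from its learned baseline
# by more than k standard deviations.
def make_baseline(samples, k=3.0):
    mu, sigma = mean(samples), stdev(samples)
    def is_anomalous(value):
        return abs(value - mu) > k * sigma
    return is_anomalous

normal_rates = [95, 102, 98, 110, 101, 97, 105, 99]  # learned in peacetime
check = make_baseline(normal_rates)

assert not check(108)  # within normal variation
assert check(5000)     # flood-like spike is flagged
```

Real products would track many such counters per category (source IP, port, protocol) and learn daily and weekly baselines, but the principle of comparing live measurements to a learned baseline is the same.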

So the most important content is really in the statistical analysis of the anomaly recognition module, though from the description above there is nothing especially novel to see; it must come down to a good algorithm. The filtering level actually handles very familiar attacks with obvious features; anti-spoofing targets attacks like SYN flood, perhaps including a SYN cookie module, though there may be more patented technology. Protocol analysis is probably relatively weak in practice, but it can check common protocols for specific attacks and detect protocol errors and some simple behavioral violations, which is straightforward. Rate limiting is just random packet drop, the most helpless method, hence the final level.

This product is mainly for mitigation, not IP traceback. But it certainly still faces important problems, such as:

1. How to deal with a real bandwidth flood. If the router is gigabit and the attack already accounts for 90% of the traffic, leaving only 10% for legitimate use, the router starts dropping packets randomly before the Guard ever sees them. (There is no way around this; it is the bottleneck of all defense technologies.)

2. Realistic attacks. Realistic attack traffic is difficult or impossible to identify. Traffic whose basic form is the same as normal traffic, with very similar statistics, is hard to distinguish. Some attacks, such as reflective e-mail attacks, consist of perfectly legal traffic, but are very hard to classify.










Tuesday, October 5, 2010

CMMI: Tailoring Is Fundamental



Respondent: Liu sub Mbayu, quality assurance manager of Hunan Kechuang
Interviewer: Feng Shan, China Software Network




On July 9, 2007, Hunan Kechuang Innovation Information Technology Co., Ltd. (formerly the Changsha Kechuang System Integration Company) passed a formal CMMI Level 3 SCAMPI Class A appraisal chaired by SEI-authorized lead appraiser Dr. Edward Wang. It is the first domestic software company to pass a CMMI3 appraisal under version 1.2 of the CMMI model since that version's release.

Reporter (hereinafter "Reporter"): Today we have the honor of interviewing Liu sub Mbayu, quality assurance manager of Hunan Kechuang Information Technology Co., Ltd. of Changsha, Hunan. First of all, please tell us about the history of the company's CMMI implementation.

Liu sub Mbayu (hereinafter "Liu"): Hunan Kechuang is a well-known enterprise in Hunan's IT industry. Its business covers system integration, intelligent buildings, industrial vision, and software development, with software development accounting for the largest share. Since the end of 2003, Hunan Kechuang has used the ideas and concepts of the CMMI model to build its software project process system, in particular an independent quality assurance system, and has accumulated rich process-improvement experience and process data. On July 9 this year the company passed a formal CMMI3 appraisal and received the CMMI3 certificate.

The process sounds simple, but the experience gained along the way is hard to sum up in a few words.

As domestic software technology continues to evolve, software production has grown in scale, customers' requirements have become more sophisticated, software itself has become more complex, and market competition has intensified, so the demands on software enterprises keep rising. Scaling up production, improving software quality, reducing development costs, and delivering products on schedule are problems software companies urgently need to solve.

Against this background, Kechuang chose the ISO9001:2000 standard in 2001 and built a quality management system. After several years of operation and continuous improvement, the system took on the company's own characteristics and played a key role in business management and project development. We also analyzed the problems in our software process and examined various management concepts and methods. At the end of 2003 we adopted the SEI's CMMI model as our guiding philosophy and, combined with the company's years of accumulated practice, revised our software development process, in particular building an independent quality assurance system to inspect and audit R&D projects and products. After three years of operation we had established a management system suited to the enterprise itself, settled on our own management philosophy, and developed a unique corporate culture, using a variety of tools to manage the software development process and project records and accumulating rich process data. To expand the software market, further improve product development efficiency, reduce development costs, improve risk control, and enhance market competitiveness, the company decided to pursue formal CMMI3 accreditation. We engaged Beijing Aobo Ocean Consulting for process improvement and appraisal consulting, prepared thoroughly, and on July 9, 2007 passed the formal CMMI3 SCAMPI Class A appraisal chaired by SEI-authorized lead appraiser Dr. Edward Wang, who issued the CMMI3 certificate on the spot.

Passing the CMMI appraisal marks the end of one stage of the company's process improvement; it is also the starting point for our further continuous improvement.

Reporter: What does passing the CMMI appraisal mean for your company? Where does its practical value mainly lie?

Liu: First, our management concepts were further deepened. CMMI comes from the United States and carries a strong cultural identity; we should gradually absorb its essence in order to improve and deepen our management philosophy and raise our management level.

Second, standardized process documentation and project process records. This is consistent with the principles of ISO9000. Our company requires software project members to keep daily logs, and defects found in reviews and testing must be recorded, making the project process transparent and controllable.

Third, quantitative monitoring of the project process. Controlling software project quality, schedule, and cost quantitatively rather than qualitatively is a qualitative leap, and quantitative monitoring is a defining characteristic of CMMI. Only with quantification does objective comparison, evaluation, and assessment across projects become possible. We used the accumulated process data to analyze and establish the company's process measurement baseline.

Fourth, the institutionalization of internal process improvement. Process improvement cannot be done overnight, nor is it ever finished; it must be continually refined as the enterprise develops, as external requirements change, and as projects change in practice. This process is permanent. Our company manages continuous improvement as a specific project, with its own plans, ensuring sustainability at both the institutional and the operational level.

Of course, the effect of management improvement is implicit, not always immediate. At the beginning of the CMMI implementation, the company's management had high expectations and hoped for significant results within a short time; in reality, whether and how effective it is must be demonstrated with data. With too few data samples in the short term, the effect is hard to discern, and there may even be local regressions. The data accumulated over the years show that the improvements were effective: in particular, the adoption of the standard process, the efficiency of reviews, control over the product development cycle, and risk control were all greatly enhanced, customer satisfaction improved markedly, and development costs were further reduced. I believe that through constant practice and improvement of the standard process and the accumulation of process data, our software productivity, product quality control, ability to meet user needs, and development costs will all further improve, further enhancing the company's market competitiveness.

In addition, in today's highly competitive software market most software companies are expected to show strong R&D capability, and many software projects, especially international ones, require the bidder to hold a CMMI qualification, which undoubtedly raises the threshold for competing for software projects. Passing the CMMI appraisal is very helpful to our company in market competition and in participating in the international software outsourcing race.

Reporter: How did your company deal with the resistance encountered while implementing CMMI?

Liu: Any reform means changing habits and readjusting interests against the force of habit. Using the CMMI model to transform the software process is no different: it creates new processes that differ from past norms, new interfaces, and changes of ideas, which means changing certain habits, certain established control flows, and ways of thinking and working. Understandably, from management down to ordinary staff there will be resistance at various levels, and possibly a natural lack of cooperation with the implementation, which must be faced and resolved.

On the management side, managers need on the one hand to give themselves time to recharge, consciously accept the CMMI concepts, and constantly update their knowledge, solving the problem at the level of thinking; this is the most critical point, for only with management's conscious, full support can any reform succeed. On the other hand, when facing challenges from or conflicts with staff, managers should hold a firm attitude and determination toward implementing CMMI, without the slightest hesitation; otherwise it is easy to give up halfway, or to reduce the effort to a mere formality. Where there is input there will be reward. Practice has shown that the standard process system based on the CMMI model did help the company improve software development quality, reduce development costs, and improve software product quality, which makes customers view our services more positively.

From the staff's point of view, I think what matters more is strengthening execution. The enterprise uses large-scale training to strengthen staff's awareness of execution and familiarity with the standard process, and in particular establishes a clear channel for improvement suggestions, so that employees feel ownership, can directly participate in process improvement, and can easily put forward their suggestions. By translating CMMI's ideas into concrete processes distributed as daily norms, employees need not care about CMMI's profound theory, nor about whether the enterprise follows ISO standards, the CMMI model, or some other management system standard; as long as they work according to the company's documents, they are on track. This largely resolved employees' doubts and misgivings.

From the point of view of the company's top decision makers, it is necessary to implement the CMMI management philosophy and merge CMMI culture into the company's culture. Corporate culture is the highest level of business operation; with a good corporate culture, company policies and systems can really be launched and implemented. In this process we merged the good core ideas of CMMI into the company's core philosophy and culture.

I would also emphasize adhering to a gradual approach. Any larger change to processes, methods, or tools should follow the idea of piloting first and promoting afterwards: find the problems in the pilot, improve in the pilot, and once people are familiar with the new processes, methods, and tools, promotion becomes relatively easy.

Some software company managers, and some developers, often complain that strict management ties down their innovation. They think that promoting CMMI step by step, with every activity done according to plans and standard procedures, will harm a corporate culture of innovation. Among the developers I have met, a large number hold this view.

I think this view arises for two reasons. First, some enterprises implement CMMI too mechanically, detached from reality and not closely integrated with practice, going through the motions for the sake of certification, which dampens developers' enthusiasm. For example, the analysis and design phases require developers to exercise considerable creativity; if reviews only check the unification of document templates and formatting, down to font size and indentation, while ignoring content, they are bound to cause resentment among developers, even though a unified template in itself is harmless. If dedicated documentation specialists are provided, analysis and design staff can be freed from standardizing documents to concentrate on creative requirements analysis and software design, while the specialists standardize the documents; both efficiency and document quality will be higher. Second, such people lack real software engineering and management experience, have done too few large projects, have too little experience of failure, or have never objectively analyzed the causes of failure. On this point there is no dispute: CMMI is a distillation of software engineering experience and lessons learned, so that we do not repeat those failures.

Reporter: Finally, for companies about to embark on the road to CMMI, do you have anything to say to them?

Liu: First of all, apply it flexibly and take the initiative. Mechanically copying the provisions of CMMI is a common mistake in implementing CMMI. Many domestic software organizations evolved gradually out of workshop-style software development, and although they have set up their own development processes to varying degrees, those processes have limitations or shortcomings. Within an enterprise, staff with more than ten years of real software engineering experience are relatively few, and often those who have it do not want to manage. When forming the EPG, the enterprise should give full consideration to both development experience and software engineering experience. If the EPG members lack engineering experience and there are no experts with real practical experience to guide them, their understanding of CMMI cannot be very deep, and it is easy either to apply CMMI's provisions mechanically or to treat other businesses' CMMI best practices as doctrine, without tailoring and adapting them; in fact, this is precisely contrary to the spirit of CMMI.

CMMI is a distillation of software engineering experience, summed up from practice to guide practice, and CMMI itself is continually updated and improved. Every enterprise has its own characteristics. Take Microsoft's MSF: it is Microsoft's own internal standard for the management process, a summary of Microsoft's product development experience, and some of its content is not in CMMI. It cannot be copied wholesale, but anything that can improve your own software management level is worth attempting boldly.

When implementing CMMI, part of the resistance encountered comes from copying CMMI provisions that do not fit the enterprise's reality, without analyzing specific situations specifically. In fact, front-line managers and developers understand the actual situation best; those who understand the practice should have the say. Therefore, when developing the CMMI-based standards, and especially the development processes, standard operating procedures, guidelines, and record templates to be executed, we must first seek the opinions of the people who will carry them out, refine them fully, and promote them only on the basis of consensus.

In addition: reform, not revolution. Implementing CMMI in a revolutionary way, hoping to solve process capability through a single campaign, may reflect not understanding CMMI or not knowing that management improvement is gradual, or else knowing it but expecting to pass the CMMI appraisal in the short term purely in pursuit of market buzz. Even if some enterprises pass a high-level CMMI appraisal in a short time, without real effect it is hard to keep up, and they give up halfway; there are many examples of this. Introducing CMMI will greatly affect the corporate culture and change ways of thinking and doing things. Someone once vividly likened process improvement to losing weight: you can rely on surgery or starvation to lose weight in a short time, but if you do not fundamentally improve your diet, lifestyle, and exercise habits, the weight will quickly return, or you will get even fatter. I think this analogy is very apt.

Finally, several features of CMMI culture must be adhered to for ultimate success. First, the process-oriented way of thinking: adhere to input and output documents for each process, and to verification and validation of those inputs and outputs. Second, establish the idea of checks and balances. Separation of powers is the essence of America's institutional culture, and the same holds in CMMI, which naturally has its unique advantages: projects need the independence of the project manager, the quality assurance staff, and the process monitoring staff. Although this inevitably increases local costs, it ensures that the project is controllable at a fundamental level, and the benefit will far outweigh the investment. Third, persist in letting the data speak, that is, adhere to process and product measurement. For production efficiency, product quality, project schedule delays, requirements quality, and risk control, qualitative descriptions are often feeble and unconvincing; only objective data and quantitative comparison truly convince, and this is also the foundation and guarantee for advancing to the higher CMMI levels.

I wish all companies interested in adopting the CMMI model an early arrival at its benefits.






