Systems Integration Project Management Engineer (3rd Edition) Chapter 4: Information Systems Architecture
Information System Architecture
一、 Architecture basics
I. Summary
The Institute of Electrical and Electronics Engineers (IEEE) defines system architecture as the fundamental organization of a system, embodied in the components that make up the system, the relationships among those components, the relationships between the system and its environment, and the guidelines governing the architecture's design and evolution. If the system in question spans the entire organization, the architecture defines the direction, structure, relationships, principles, and standards of the organization's information systems.
Information system architecture refers to the basic concepts or properties that embody an information system's components, their relationships, and the principles governing the system's design and evolution.
The architectures involved in an information system integration project typically include the system, data, technology, application, network, and security architectures. An organization-level information system integration architecture carries the organization's development strategy and business architecture above it and guides the implementation of specific information system plans below it, serving as the backbone that connects the two.
This hierarchy must be determined by the organization's strategic goals, operating model, and level of informatization, and it must closely support the realization of business value.
The essence of architecture is decision-making: choices made after weighing direction, structure, relationships, principles, and other factors. An information system project can design its various architectures on the basis of the project's guiding ideology, design principles, and construction goals.
II. Guiding Ideology
The guiding ideology comprises the overall principles, requirements, and guidelines that must be followed in carrying out a piece of work; it steers the work from a macro, whole-of-project perspective. Implementing a shared guiding ideology helps the project's many participants keep a consistent understanding of the key points and values of the integration, reducing unnecessary contradictions and conflicts.
For example, the guiding ideology for the construction of a city's social insurance smart governance center was defined as: guided by Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, fully implement the spirit of the 20th National Congress of the Communist Party of China; adhere to people-centered development, doing everything for the people and relying on the people in everything, always putting the people first and taking their yearning for a better life as the goal; adapt to the needs of social insurance reform and development in the new era, focusing on its important areas and key links; with coordinated planning, innovation as the driver, and data as the enabler, comprehensively build the city's smart social insurance governance center; advance the smart social insurance system, innovation system, and capacity building of the new era; continuously improve social insurance governance capabilities and service levels; provide strong information support for the high-quality development of social insurance in the new era; and promote the modernization of the city's governance system and governance capabilities.
III. Design Principles
Design principles provide a solid foundation for architecture and planning decisions, for the development of policies, procedures, and standards, and for the resolution of conflicts.
Principles need not be numerous, but they must be future-oriented and must be recognized, supported, and upheld by the senior managers of the relevant parties. Too many principles reduce architectural flexibility, so many organizations define only higher-level principles, often limiting their number to between 4 and 10.
The design principles for the construction of a city’s social insurance smart governance center include:
1. Adhere to a people-centered approach
Adhere to the people-centered development philosophy, stay close to the people's service needs and service experience, and take the people's satisfaction as the measure of the work. Through the construction of the city's social insurance smart governance center, support the building of a social insurance public service system that satisfies the public.
2. Adhere to innovation leadership
Comprehensively apply mainstream technologies such as the Internet, big data, intelligent technology, the Internet of Things, 5G, AI, and GIS; driven by mechanism reform, model innovation, data, and technology empowerment, build the social insurance smart governance center and advance the modernization of the city's social insurance governance system and governance capabilities.
3. Adhere to problem orientation
Focus the construction of the city's smart social insurance governance center on solving the key issues, difficulties, and pain points that restrict the development of the city's social insurance; identify breakthroughs, enhance pertinence, and keep the overall picture in view, raising the standardization, specialization, and collaboration of services and the intelligence, precision, and scientific rigor of management.
4. Adhere to overall coordination
The construction of the city's social insurance smart governance center must center on the overall work of the city's social insurance system. Starting from the dimensions of system connection, policy support, departmental linkage, business collaboration, and data sharing, it should create a new smart governance system for social insurance that integrates business with technology, internal with external, horizontal with vertical, and online with offline, forming a new driving force for the high-quality development of social insurance in the new era.
5. Adhere to safety and controllability
The construction of the city's social insurance smart governance center must correctly handle the relationship between innovative development and security, strengthen information security and personal privacy protection, improve the multi-level social insurance risk prevention and control system, and consolidate reliable, available, and sustainable information support capabilities.
6. Adhere to scientific implementation
In accordance with the overall plan and construction scheme of the city's social insurance smart center, clarify the boundaries, relationships, and priorities between the construction of the smart governance center and the overall construction of the wider social insurance informatization project; make full use of the existing information infrastructure and application systems; plan in a coordinated way and implement carefully, with emphasis on being implementable, operable, and assessable, so that the benefits of the smart governance center's construction can be fully realized.
IV. Construction Goals
The construction goal states the ultimate aim of the integration effort: what effect is to be achieved and whom it serves. It is directional in nature; usually the ideas and visions put forward by the senior leaders of the relevant parties constitute the construction goals.
The construction goal of the city's social insurance smart governance center was defined as: based on the functional mission and development direction of social insurance in the new era, in line with the reform requirements of "delegating power, improving regulation, and optimizing services", and guided by new public management theory, comprehensively apply modern thinking and mainstream technologies such as the Internet, big data, intelligent technology, the Internet of Things, 5G, AI, and GIS, focusing on business governance, comprehensive governance, and big data governance. By a target year, initially complete a social insurance smart governance center that is broadly connected, open, integrated, linked, intelligent, online, visible, and secure; comprehensively improve the service, intelligent supervision, risk prevention and control, decision analysis, and global linkage capabilities of the city's social insurance system; promote the construction of a nationally leading smart governance system, smart risk-control system, smart connected-business system, and smart public-benefit system for social insurance; set a new benchmark for urban governance and create a new national paradigm for social insurance governance; provide new momentum for the high-quality development of the city's social insurance in the new era; and help raise the scientific, refined, and intelligent level of the city's governance.
V. Overall Framework
A framework is a conceptual structure used to plan, develop, implement, manage, and maintain an architecture, and it is critical to architecture design. A framework reasonably separates the concerns of the organization's business and, taking roles as the starting point, presents the organization's business from different perspectives. It provides a roadmap that guides and assists architecture design toward the goal of an advanced, efficient, and fit-for-purpose architecture.
The overall reference framework for information system architecture consists of four parts:
1. Strategic system
Strategy system refers to the management activities and computer-aided systems related to strategy formulation and high-level decision-making in an organization.
In Information System Architecture (ISA), the strategic system consists of two parts
1||| One is a high-level decision support system based on information technology
2||| The second is the strategic planning system of the organization
Establishing a strategic system in ISA has two meanings:
1||| First, it represents the decision-making support capability of the information system to the organization's top managers;
2||| Second, it represents the impact and requirements of organizational strategic planning on information system construction.
Organizational strategic planning is usually divided into long-term and short-term planning. Long-term plans are relatively stable, for example adjusting the product structure; short-term plans are formulated to serve the aims of the long-term plans and adapt more readily to changes in the environment and in organizational operations, for example deciding the type of a new product.
2. Business system
A business system is a system composed of the various parts of an organization (material, energy, information, and people) that together perform certain business functions.
An organization has many business systems, such as production, sales, purchasing, personnel, and accounting systems. Each business system comprises business processes that accomplish its functions; an accounting system, for example, often includes business processes such as accounts receivable, accounts payable, invoicing, and auditing.
Business processes can be decomposed into a series of logically interdependent business activities, completed in sequence, each performed by a role and processing related data. When organizations adjust their development strategy to better fit internal and external environments (for example, by deploying information systems), they often undertake business process reengineering: centered on business processes, it breaks down the division of labor among functional departments and improves or redesigns existing processes to achieve significant gains in productivity, cost, quality, and delivery time, enhancing the organization's competitiveness.
The role of the business system in ISA is:
Model the organization's existing business systems, business processes, and business activities; under the guidance of the organization's strategy, apply the principles and methods of Business Process Reengineering (BPR) to optimize and reorganize the business processes; then model the restructured business areas, business processes, and business activities to identify relatively stable data. On the basis of this relatively stable data, the organization's application systems are developed and its information infrastructure is built.
3. Application system
An application system is the application software part of an information system.
The functions of the application software (application systems) in an organization's information system generally include:
(1) Transaction Processing System (TPS)
(2) Management Information System (MIS)
1||| Sales management subsystem
2||| Procurement management subsystem
3||| Inventory management subsystem
4||| transportation management subsystem
5||| Financial management subsystem
6||| Personnel management subsystem, etc.
(3) Decision Support System (DSS)
(4) Expert System (ES)
(5) Office Automation System (OAS)
(6) Computer-aided design/computer-aided process planning/computer-aided manufacturing (CAD/CAPP/CAM), Manufacturing Execution System (MES), etc.
Whatever level an application system belongs to, from an architectural perspective it contains two basic parts: the internal function-implementation part and the external interface part, each consisting of more specific components and the relationships among them. The interface part changes relatively frequently, driven mainly by changes in users' requirements for the form of the interface. Within the function-implementation part, the data being processed change relatively little, while the program's algorithms and control structures change more, driven mainly by changes in users' functional requirements for the application system and in interface requirements.
4. Information infrastructure
Organizational information infrastructure is an environment, built according to the organization's current business, foreseeable development trends, and its requirements for information collection, processing, storage, and circulation, consisting of information equipment, communication networks, databases, system software, and supporting software.
Organizational information infrastructure is divided into three parts:
1||| Technical infrastructure
It consists of computer equipment, network, system software, supporting software, data exchange protocols, etc.
2||| Information resource facilities
It consists of data and information itself, data exchange forms and standards, information processing methods, etc.
3||| Management infrastructure
It refers to the organizational structure of the information system department in the organization, the division of labor among information resource facility managers, the management methods and rules and regulations of the organization's information infrastructure, etc.
Owing to technological development and changing organizational requirements, the technical infrastructure faces many changing factors in the design, development, and maintenance of information systems, and because implementation technologies are diverse, the same function can be achieved in multiple ways. Information resource facilities change relatively little during system construction: whatever functions the organization performs and however business processes change, data and information must still be processed, and most of them do not change with the business. The management infrastructure changes relatively often, because organizations must adapt to environmental change and competition; especially during China's transition toward a market economy, the introduction of or changes in economic policies, business model reforms, and the like strongly affect organizational rules and regulations, management methods, division of labor, and organizational structure. The above is only an overall description of the relative stability and volatility of the three basic components of the information infrastructure; each of the technical infrastructure, information resource facilities, and management infrastructure has both relatively stable and relatively volatile parts, so no sweeping generalization can be made.
5. The strategic system is at the first level, corresponding to the strategic management layer. It places innovation, restructuring, and reengineering demands on the business system, and integration demands on the application system. The business system and the application system are at the second level, the tactical management layer: the business system manages and controls the organization through optimized business processes, while the application system provides the means to use information and data effectively for that control and to improve the organization's operational efficiency. The information infrastructure is at the third level, the operational management layer, and is the foundation of the organization's informatization and digitization: it provides computing, transmission, data, and other support for the application and strategic systems, and an effective, flexible, and responsive technical and management platform for the reorganization of the organization's business systems.
二、 System architecture
I. Architecture definition
i. Common definitions mainly include:
①The information system architecture of a software or computer system is the structure or structures of the system, consisting of software elements, the externally visible properties of those elements, and the relationships among them.
②The information system architecture provides a high-level abstraction of the structure, behavior, and properties of the software system, consisting of a description of the elements that constitute the system, the interaction of these elements, the patterns that guide the integration of elements, and the constraints of these patterns.
③Information system architecture refers to the basic organization of a system, which is embodied in the components of the system, the relationship between components, and the relationship between components and the environment, as well as the principles that guide its design and evolution.
The first two definitions follow the abstraction hierarchy of "element-structure-architecture" and share the same basic meaning. "Software element" in these definitions is a more general abstraction than "component", and an element's "externally visible properties" are the assumptions other elements can make about it, such as the services it provides and its performance characteristics.
ii. It can be understood from the following 6 aspects:
1. Architecture is an abstraction of a system that reflects this abstraction by describing elements, their externally visible properties, and the relationships between elements. Therefore, details related only to the internal concrete implementation are not part of the architecture, i.e. the definition emphasizes the "externally visible" properties of the element.
2. An architecture comprises multiple structures. Each structure describes the relationships among elements from one perspective and conveys information about certain aspects of the architecture, but no single structure can represent the architecture of a large information system.
3. Every piece of software has an architecture, though not necessarily a document describing it; the architecture exists independently of its description, and an outdated document no longer reflects the architecture.
4. The set of elements and their behaviors constitutes the content of the architecture: what elements the system consists of, what externally visible functions those elements have, and how they connect and interact. Abstraction is performed in two respects: statically, focusing on the system's large-grained (macro) overall structure, such as layering; dynamically, focusing on the common characteristics of key behaviors within the system.
5. Architecture is "fundamental": it usually involves common solutions to various key repetitive problems (reusability), as well as important decisions with far-reaching consequences (architecture-sensitive) in system design (once implemented, changes are expensive).
6. Architecture implies "decision-making", that is, the architecture is the result of design and decision-making by architects based on key functional and non-functional requirements (quality attributes and project-related constraints).
iii. Information system architecture is very important to organizations, mainly reflected in:
① Factors affecting architecture.
The project stakeholders of a software system (customers, users, project managers, programmers, testers, marketers, etc.) place different requirements on it; the knowledge structure of the development organization's (project team's) personnel, the quality and experience of the architecture designers, and the current technical environment are all factors that affect the architecture. These factors act on the architecture through the architect's decisions about functional requirements, non-functional requirements, constraints, and conflicting requirements.
② The architecture in turn acts back on these factors, for example by affecting the structure of the development organization.
The architecture describes the system's large-grained (macro) overall structure, so labor can be divided along architectural lines, with the project group split into several working groups, making development orderly. It also affects the development organization's goals: a successful architecture creates new business opportunities for the development organization, thanks to the system's demonstrability, the architecture's reusability, and the team's accumulated development experience. Meanwhile, a successful system shapes customers' requirements for the next system.
II. Architecture classification
i. Classification
1. Physical architecture
Physical architecture disregards the actual work and functions of each part of the system and considers only, in the abstract, the spatial distribution of its hardware.
According to the topological relationship of information systems in space, they are generally divided into
(1) Centralized architecture
Centralized architecture refers to the centralized allocation of physical resources in space.
The early stand-alone system was the most typical centralized architecture, concentrating software, data, and the main peripherals in one computer system. A multi-user system in which users at different locations share resources through terminals is also a centralized architecture. Its advantages are concentrated resources, easy management, and high resource utilization. However, as system scale expands and systems grow more complex, a centralized architecture becomes ever harder to maintain and manage, and it often dampens users' enthusiasm, initiative, and sense of participation in building the information system. Moreover, over-concentration of resources makes the system fragile: once core resources fail, the entire system can be paralyzed.
(2) Distributed architecture
A distributed architecture connects computer hardware, software, data, and other resources at different locations through a computer network to achieve resource sharing across locations.
Its main features: resources can be configured according to application needs, improving the information system's adaptability to user needs and changes in the external environment; the system is easy to expand and robust, since a failure at one node does not stop the whole system. However, because resources are dispersed among subsystems, management standards are hard to unify and coordination is difficult, which hinders planning and managing the organization's resources as a whole.
Distributed architecture can be divided into
1||| General distributed systems
The server provides only software, computing, and data services; each computer system accesses data and program files on the server according to its assigned permissions.
2||| Client/server architecture
Network nodes are divided into two categories: servers and clients. Servers include file servers, database servers, print servers, and so on; the other computer systems on the network nodes are clients. Users make service requests to a server through a client, and the server returns the processed information to the user.
2. Logical architecture
Logical architecture refers to the synthesis of an information system's functional subsystems; it is the system's functional composite and conceptual framework.
For the management information system of a production organization, division by management function yields information management subsystems for procurement, production, sales, human resources, finance, and so on. A complete information system supports all of an organization's functional subsystems, enabling each to operate at the levels of transaction processing, operational management, management control, and strategic planning. Each subsystem can have its own dedicated files while sharing the information system's data, with the subsystems connected through standardized network and data interfaces. Likewise, each subsystem has its own application programs and can also call the common programs serving all functions and the models in the system's model library.
ii. System integration
1. Horizontal integration
Horizontal integration refers to integrating functions and needs at the same level, for example integrating the personnel and payroll subsystems of the operational control layer to unify grassroots business processing.
2. Vertical integration
Vertical integration refers to organizing business at all levels around a given function or requirement, linking superior and subordinate levels. For example, a branch's accounting system and the organization's overall accounting system have much in common and can form an integrated processing flow.
3. Vertical and horizontal integration
Vertical and horizontal integration refers to synthesis from both the information model and the processing model: achieving centralized sharing of information, modularizing programs as far as possible, extracting common parts, and establishing a system-wide common data system and an integrated information-processing system.
III. General principles
Information system architecture refers to a multi-dimensional, hierarchical, integrated, and open systematic structure, established on the basis of comprehensive consideration of the organization's strategy, business, organization, management, and technology, and centered on the components of the organization's information system and the relationships among them. It gives the organization's information systems a degree of flexibility and provides flexible, effective methods of implementation.
Architecture consists of two basic parts: components and the relationships between components.
In an information system, the relatively stable components and relationships are extracted, and with the support of these stable parts, the relatively changeable parts are reorganized to meet changing requirements. The system thereby gains a degree of adaptability to changes in its environment, that is, a degree of flexibility; this is the basic principle of information system architecture.
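As an illustration of this principle, here is a minimal Java sketch (a hypothetical report-export example, not from the text): the interface is the relatively stable part that other components depend on, while the implementations are the relatively changeable parts that can be swapped without disturbing the rest of the system.

```java
// Relatively stable part: the contract that other components depend on.
interface ReportExporter {
    void export(String reportName, byte[] content);
}

// Relatively changeable parts: implementations can be replaced or added
// as requirements change, without touching callers of the interface.
class FileReportExporter implements ReportExporter {
    @Override
    public void export(String reportName, byte[] content) {
        // write the report to local disk (details omitted)
    }
}

class HttpReportExporter implements ReportExporter {
    @Override
    public void export(String reportName, byte[] content) {
        // send the report to a remote service (details omitted)
    }
}
```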
IV. Common architectural models
i. Stand-alone application mode
A stand-alone system is the simplest software structure: an application that runs on a single physical machine. The application may, of course, be multi-process or multi-threaded.
A large stand-alone software system typically integrates many subsystems under a graphical interface and may run on multiple platforms such as Linux, UNIX, and Windows. Examples include professional products such as CATIA, Pro/Engineer, and AutoCAD in computer-aided design, and the widely familiar Photoshop and CorelDRAW in image processing and editing.
A particularly important field for software architecture design is information systems, that is, software systems centered on data processing (data storage, transmission, security, query, display, etc.).
ii. Client/server mode
The client/server (C/S) model is the most common in information systems. It can be understood as the request/reply program structure of inter-process communication (IPC) programming over TCP/IP: the client sends a TCP or UDP packet to the server, and the server, based on the request received, sends a TCP or UDP packet back to the client.
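The request/reply structure described above can be sketched in a few lines of Java socket code. This is a minimal illustration, assuming an arbitrary port (9090) and a one-line text protocol; a real system would add framing, error handling, and concurrency.

```java
import java.io.*;
import java.net.*;

// Server: accepts one client, reads a request line, sends back a reply.
class SimpleServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090);
             Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String request = in.readLine();       // the client's "send"
            out.println("REPLY: " + request);     // the server's response
        }
    }
}

// Client: connects to the server, sends a request, prints the reply.
class SimpleClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9090);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("HELLO");
            System.out.println(in.readLine());
        }
    }
}
```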
Four common architectures
1. Two-layer C/S
Two-layer C/S is essentially the application-system embodiment of the IPC client/server structure, that is, the "fat client" mode. In practical system design, this structure mainly means a front-end client paired with a back-end database management system.
The front-end interface plus back-end database service is the most typical form; database front-end development tools such as PowerBuilder, Delphi, and VB are software tools built specifically for creating this structure.
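In code, the two-layer "fat client" amounts to the client program talking directly to the database server, with all business logic on the client side. A minimal JDBC sketch, assuming a hypothetical orders table and connection details:

```java
import java.sql.*;

// Two-layer C/S: the client holds the business logic and queries the
// database server directly; there is no middle tier.
public class FatClient {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://dbhost:5432/sales"; // hypothetical DB
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, amount FROM orders WHERE amount > ?")) {
            ps.setInt(1, 1000); // business rule applied in the client
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name")
                            + ": " + rs.getInt("amount"));
                }
            }
        }
    }
}
```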
2. Three-layer C/S and B/S structure
Besides database access operations, the front-end interface sends requests to the back end, and much other business logic must be processed. In three-layer C/S, the front-end interface and the back-end services communicate (requests, replies, remote function calls, etc.) through a protocol, either self-developed or standard.
Common options include the following:
1||| Developing directly on the low-level Socket API over TCP/IP. This generally suits only small systems with simple requirements and functions.
2||| Establishing a custom message mechanism (encapsulating TCP/IP and socket programming) and realizing front-end/back-end communication through it. The message mechanism can be defined over XML or a byte stream (Stream). Although the communication is custom, large distributed systems can be built on it.
3||| Based on RPC programming.
4||| Based on the CORBA/IIOP protocol.
5||| Based on Java RMI (see the sketch after this list).
6||| Based on J2EE JMS.
7||| Based on the HTTP protocol, as in the information exchange between a browser and a web server. Note that HTTP is not an object-oriented protocol: object-oriented application data must first be flattened (serialized) and then transmitted.
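To make option 5 concrete, here is a minimal Java RMI sketch: the front end calls a remote object through an interface as if it were local, while the RMI runtime carries the request and reply over the network. The service name and business method are hypothetical.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: the contract shared by front end and back end.
interface QuoteService extends Remote {
    double latestPrice(String symbol) throws RemoteException;
}

// Back-end implementation, exported to the RMI runtime.
class QuoteServiceImpl extends UnicastRemoteObject implements QuoteService {
    QuoteServiceImpl() throws RemoteException { }

    @Override
    public double latestPrice(String symbol) throws RemoteException {
        return 42.0; // placeholder for real business logic
    }
}

class RmiServer {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("quotes", new QuoteServiceImpl()); // publish by name
    }
}

class RmiClient {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        QuoteService quotes = (QuoteService) registry.lookup("quotes");
        System.out.println(quotes.latestPrice("ACME")); // remote call
    }
}
```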
The most typical application model based on the three-layer C/S structure is the B/S (Browser/Server) model.
A web browser is a client application for retrieving and displaying documents; it connects to the web server via the Hypertext Transfer Protocol (HTTP). In this mode the universal, low-cost browser eliminates the cost of developing and maintaining the client software required in the two-layer C/S mode. Familiar browsers include Microsoft Internet Explorer and Mozilla Firefox.
A web server is a program residing on a computer on the Internet. When a web browser (client) connects and requests a file or data, the server processes the request and sends the file or data to the browser, with accompanying information that tells the browser how to interpret the file (i.e., the file type). Because it uses HTTP for the exchange, it may also be called an HTTP server.
Most of the operations people perform in a web browser every day are actually executed on the web server; the browser merely sends requests to the server in HTTP format and displays the returned results. The machines hosting the browser and the server may, of course, be two computers thousands of miles apart on the network.
It should be emphasized that communication between the browser and the web server in B/S mode is still over TCP/IP, but the protocol format is standardized at the application layer. In effect, B/S is a three-layer C/S structure with a universal client interface.
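The browser side of this exchange can be imitated with the standard Java HTTP client: one GET request over TCP/IP, with the Content-Type header playing the role of the "accompanying information" that tells the browser how to treat the file. A minimal sketch, using example.com as a stand-in server:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Plays the browser's role in B/S: send an HTTP request, read the reply.
public class BrowserSide {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/index.html"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());              // e.g. 200
        System.out.println(response.headers()
                .firstValue("Content-Type").orElse("unknown")); // the file type
        System.out.println(response.body());                    // the page itself
    }
}
```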
3. Multi-layer C/S structure
A multi-layer C/S structure generally means more than three layers; in practice, usually four: the front-end interface (such as a browser), the web server, the middleware (or application server), and the database server.
The middleware layer mainly completes the following aspects of work:
1||| Improve system scalability and increase concurrency performance.
2||| The middleware/application layer specializes in request forwarding or in processing related to application logic. Middleware with these functions can serve as a request proxy or as an application server, a role common in multi-layer J2EE structures; for example, the EJB containers provided by BEA WebLogic, IBM WebSphere, and others are middleware components designed to handle complex business logic.
3||| Increase data security.
4. Model-View-Controller Pattern
Model-View-Controller (MVC) is a commonly used standardization of the multi-layer C/S structure described above.
In the J2EE architecture, the View (presentation) layer is the browser layer, which displays request results graphically; the Controller is the web server layer; and the Model layer is the application-logic implementation and data-persistence part. Popular J2EE development frameworks such as JSF, Struts, Spring, and Hibernate, and combinations such as Struts+Spring+Hibernate (SSH) and JSP+Spring+Hibernate, are all oriented to the MVC architecture. Languages and libraries such as PHP, Perl, and MFC also have MVC implementation patterns.
MVC requires that presentation-layer (view) code and data-layer (model) code be separated, with the controller connecting models and views to fulfill user requests. Viewed as a layered system, MVC's controller and views usually sit at the web-server level. Depending on whether the model separates business-logic processing into an independent service layer, an MVC system can be three-tier or four-tier.
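The separation MVC demands can be shown in a few lines of plain Java (a hypothetical task list, not tied to any framework): the model holds data, the view only renders, and the controller translates a user request into a model update and a view refresh.

```java
import java.util.ArrayList;
import java.util.List;

// Model: holds data and business state; knows nothing about display.
class TaskModel {
    private final List<String> tasks = new ArrayList<>();
    void add(String task) { tasks.add(task); }
    List<String> tasks() { return List.copyOf(tasks); }
}

// View: renders whatever it is handed; contains no business logic.
class TaskView {
    void render(List<String> tasks) {
        tasks.forEach(t -> System.out.println("- " + t));
    }
}

// Controller: turns user requests into model updates and view refreshes.
class TaskController {
    private final TaskModel model;
    private final TaskView view;
    TaskController(TaskModel model, TaskView view) {
        this.model = model;
        this.view = view;
    }
    void addTask(String task) {   // a "user request"
        model.add(task);
        view.render(model.tasks());
    }
}

public class Demo {
    public static void main(String[] args) {
        new TaskController(new TaskModel(), new TaskView()).addTask("ship release");
    }
}
```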
iii. Service-Oriented Architecture (SOA) pattern
1. Service-oriented architecture
When two application systems, each with a multi-layer C/S structure, need to communicate with each other, a Service-Oriented Architecture (SOA) arises.
An independent application system is one that provides complete functions to the outside world; no matter how many layers it consists of, removing any layer would stop it from working properly.
SOA requirements can be realized with middleware, such as message-oriented middleware and transaction middleware.
In practice, service-oriented architecture takes concrete forms such as heterogeneous-system integration, homogeneous-system aggregation, and federated architecture.
2. Web Service
When service-oriented architecture is realized between web applications, it becomes Web Services: two Internet applications open some of their internal "services" to each other (such services can be understood as functional modules, functions, processes, etc.). The main protocols by which web applications currently expose internal services are SOAP and WSDL; see the relevant standards for details.
Web Service is one of the most typical and popular application patterns of service-oriented architecture.
3. The essence of service-oriented architecture
The essence is the message mechanism or remote procedure call (RPC).
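Stripped of vendor detail, this essence can be expressed as a pair of message envelopes and a client interface offering both styles. A minimal Java sketch with hypothetical types (records require Java 16+):

```java
// A service call is, at bottom, a named operation reached via messages:
// a request envelope goes out and a response envelope comes back.
record ServiceRequest(String service, String operation, String payload) { }
record ServiceResponse(int status, String payload) { }

interface ServiceClient {
    ServiceResponse call(ServiceRequest request); // synchronous, RPC style
    void send(ServiceRequest request);            // asynchronous, message style
}
```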
iv. Organizational Data Exchange Bus
That is, a common channel for information exchange between different organizational applications.
This structure is commonly used for information exchange between different application systems in large organizations; in China it is found mainly in organizations with a high degree of informatization and digitalization.
The data bus itself is essentially a software system called a connector. It can be built on middleware (such as message-oriented or transaction middleware) or developed on the CORBA/IIOP protocol. Its main function is to receive and distribute data, requests, and responses according to predefined configuration or message-header definitions.
An organization-level data exchange bus can support both real-time transactions and high-volume data transfer. In practice, however, mature enterprise data exchange buses are designed mainly for real-time transactions, and reliable high-volume transfer often has to be designed separately. When CORBA is the communication protocol, the exchange bus is the Object Request Broker (ORB), also known as the "agent system".
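The "receive and distribute according to predefined configuration or message headers" function can be reduced to a small routing table. A toy Java sketch of such a connector, with hypothetical destination names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// A toy "connector": receives messages and distributes them to registered
// destinations according to a header field, per a predefined configuration.
public class ExchangeBus {
    // destination name -> handler, i.e. the predefined routing configuration
    private final Map<String, Consumer<String>> routes = new ConcurrentHashMap<>();

    public void register(String destination, Consumer<String> handler) {
        routes.put(destination, handler);
    }

    // the header identifies the target system; the body is the payload
    public void dispatch(String destinationHeader, String body) {
        Consumer<String> handler = routes.get(destinationHeader);
        if (handler == null) {
            throw new IllegalArgumentException("No route for " + destinationHeader);
        }
        handler.accept(body);
    }

    public static void main(String[] args) {
        ExchangeBus bus = new ExchangeBus();
        bus.register("HR", msg -> System.out.println("HR system got: " + msg));
        bus.register("FIN", msg -> System.out.println("Finance system got: " + msg));
        bus.dispatch("FIN", "invoice #123");
    }
}
```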
V. Planning and design
i. Integrated architecture evolution
1. Architecture with application functions as the main line
For small and medium-sized industrial enterprises, or industrial enterprises at the initial stage of informatization and digitalization, the main goal of information system integration is to improve work efficiency and reduce business risk. Because of weaknesses in their own informatization teams and talent, and a limited understanding of informatization and digitalization within their business, such enterprises often take a "borrowed" approach: they directly purchase complete, mature application software and build the supporting infrastructure according to the software's operational requirements.
At this stage of enterprise development the focus is on a detailed division of organizational functions and the introduction of industry best practices, so informatization is typically organized by department or function. The core concern is the software functions of the information system, such as financial management, equipment management, and asset management, around which information systems are planned, designed, deployed, and operated; deploying complete software packages also strengthens the enterprise's own management and process maturity. Integration among applications or modules is accomplished mainly through the systems' software interfaces. Enterprises usually adopt unified planning with step-by-step implementation: whatever functions are needed are the functions brought online.
2. Architecture with platform capabilities as the main line
System integration architecture with platform capabilities as the main line arose from the development of cloud computing and the gradual maturing of cloud services. Its core idea is to transform the components of "silo-style" information systems into a "flattened" construction approach, including flattened data collection, network transmission, application middleware, and application development, and to give information systems elasticity and agility through standardized interfaces and new information technologies. Applications supported by a platform architecture, combined with special-purpose construction or independent configuration (or small-scale development), can quickly deliver the application functions an enterprise needs, overcoming packaged-software vendors' weakness in personalized customization.
In practice, an enterprise's architectural transformation is a continuous process. Enterprises keep the packaged-software deployment model for mature, rarely changing applications and adopt the platform architecture for new and changing ones, eventually either maintaining the two architectures side by side (so-called dual-state IT, i.e., an agile state integrated with a steady state) or converting entirely to the platform architecture.
3. Architecture with the Internet as the main line
When an enterprise develops to the industrial-chain or ecological-chain stage, or becomes a complex, diversified group enterprise, it begins to seek a shift or transition to a system integration architecture with the Internet as the main line.
Such an architecture emphasizes decomposing each information system function into the smallest usable applications (microservices).
It also integrates and applies more of the new generation of information technologies and their application innovations.
4. Which main-line architecture an enterprise adopts depends essentially on the degree of its business development, as reflected in the maturity of its digital transformation.
ii. TOGAF architecture development method
TOGAF (The Open Group Architecture Framework) is an open enterprise architecture framework standard that provides consistency guarantees for standards, methodologies, and communication among enterprise architecture professionals.
TOGAF basics
TOGAF is developed by the international organization The Open Group.
The organization began developing system architecture standards in 1993 in response to customer requirements and published the TOGAF architecture framework in 1995. TOGAF is founded on the U.S. Department of Defense's Technical Architecture Framework for Information Management (TAFIM). It rests on an iterative process model supported by best practices and a reusable set of existing architecture assets, and it can be used to design, evaluate, and build a suitable enterprise architecture. Internationally, TOGAF has proven able to build enterprise IT architecture flexibly and efficiently.
The framework is designed to help enterprises organize and address all critical business needs through the following four objectives:
1||| Make sure all users, from key stakeholders to team members, speak the same language. This helps everyone understand the framework, its content, and its goals in the same way, putting the entire business on the same page and breaking down communication barriers.
2||| Avoid being “locked” into proprietary solutions for enterprise architecture. As long as the business uses TOGAF internally and not for commercial purposes, the framework is free.
3||| Save time and money and use resources more efficiently.
4||| Achieve significant return on investment (ROI).
TOGAF reflects the structure and content of an enterprise's internal architecture capability. TOGAF 9 comprises six components:
1||| Architecture Development Method
This part is the core of TOGAF. It describes the TOGAF Architecture Development Method (ADM). ADM is a step-by-step method for developing enterprise architecture.
2||| ADM Guidelines and Techniques
This section contains a series of guidelines and techniques for applying the ADM.
3||| Architecture content framework
This section describes the TOGAF content framework, including a structured meta-model of architecture artifacts, the use of reusable Architecture Building Blocks (ABBs), and an overview of typical architecture deliverables.
4||| Enterprise Continuum and Tools
This section discusses taxonomies and tools for classifying and storing the output of architecture activities within an enterprise.
5||| TOGAF reference model
This part provides two architectural reference models: the TOGAF Technical Reference Model (TRM) and the Integrated Information Infrastructure Reference Model (III-RM).
6||| Architecture Capability Framework
This section discusses the organization, processes, skills, roles, and responsibilities required to establish and operate an architecture practice within an enterprise.
The core ideas of the TOGAF framework are:
1||| Modular architecture
The TOGAF standard adopts a modular structure.
2||| Content framework
The TOGAF standard includes a content framework that makes the outputs produced by following the Architecture Development Method (ADM) more consistent. The TOGAF content framework provides a detailed model of architectural work products.
3||| Extended guidance
The TOGAF standard's set of extended concepts and specifications provide support for internal teams in large organizations to develop multi-tiered integration architectures that operate within an overarching architectural governance model.
4||| Architectural style
The TOGAF standard is designed to be flexible and can be used in different architectural styles.
5||| The key to TOGAF is the Architecture Development Method: a reliable, proven method for developing an architecture that meets business needs.
ADM method
The Architecture Development Method (ADM) defines in detail the steps required to develop an enterprise architecture and the relationships among them; it is the core of the TOGAF specification.
The ADM consists of a set of phases arranged in a ring, in the order in which the architecture domains are developed.
The model divides the full ADM life cycle into ten phases: preliminary, requirements management, architecture vision, business architecture, information systems architecture (application and data), technology architecture, opportunities and solutions, migration planning, implementation governance, and architecture change management. These ten phases form an iterative process.
The ADM is applied iteratively throughout the architecture development process: across the whole cycle, between phases, and within each phase. Throughout the ADM life cycle, each phase must validate its design results against the original business requirements, including the phases unique to the business process. Validation entails revisiting the enterprise's coverage, time frame, level of detail, plans, and milestones; each phase should also allow for the reuse of architectural assets.
ADM has formed a three-level iteration concept:
1||| Iteration based on the overall ADM
The ADM is applied in a ring: the completion of one architecture development phase leads directly into the next.
2||| Iterations across multiple development phases
For example, after completing development in the technology architecture phase, work may return to the business architecture phase for further development.
3||| Iteration within a stage
TOGAF supports iterative development of complex architectural content based on multiple development activities within a stage.
Main activities in each development stage of ADM
VI. Value-driven architecture
i. Model overview
The purpose of a system is to create value for its stakeholders.
The core characteristics of the value model:
(1) Value expectation
A value expectation represents a stakeholder's demand for a specific feature, including its content (function), satisfaction (quality), and utility at different quality levels.
For example, a driver has value expectations about the speed and safety of a car's emergency braking at 60 kilometers per hour.
(2) Counterforce
In the system's actual deployment environment, meeting a value expectation is difficult; usually, the higher the expectation, the greater the difficulty, that is, the counterforce.
For example, the outcome of emergency braking at 60 kilometers per hour depends on the road surface type, the road's slope, and the car's weight.
(3) Catalyst for change
A change catalyst is an event in the environment that changes value expectations, or a constraint that leads to a different outcome.
Counterforces and change catalysts are together called constraints, and the three concepts together are called value drivers. A system designed to effectively satisfy its stakeholders' value models must be able to identify and analyze those value models.
General approaches, such as use-case scenarios and business/marketing requirements, begin by focusing on the types of actors that interact with the system. This approach has four notable limitations.
(1) It attends more to the actors' behavioral models and less to the goals behind them.
(2) Actors are rigidly divided into roles whose members are treated as essentially identical (for example, businessmen, investment managers, or system administrators).
(3) Differences in constraining factors are often ignored (for example, securities traders in New York versus London, or the market at opening versus during the trading day).
(4) The results are binary: a requirement is met or not; a use case completes successfully or fails.
There is a logical, practical reason for this approach: it uses sequential reasoning and categorical logic, so it is easy to teach and explain, and it produces results that are easy to verify.
ii. Architectural challenges
An architectural challenge occurs because one or more constraints make it more difficult to meet one or more expectations.
In any environment, identifying architectural challenges involves assessing:
(1) Which constraints affect one or more expected values.
(2) Where impacts are known, whether they make meeting expectations easier (positive impacts) or harder (negative impacts).
(3) How severe each impact is; simple levels of low, medium, and high are usually sufficient.
The evaluation must consider the architecture in the context of each challenge. While utility curves can be averaged across contexts, the effects of constraints on expected values cannot be treated the same way. For example, suppose a web server provides pages in two situations: static information, such as reference material, requiring a response time of 1 to 3 seconds; and dynamic information, such as a personal score sheet for an ongoing sports event, with a response time of 3 to 6 seconds.
Both cases face CPU, memory, disk, and network limits. But when request volume grows 10- or 100-fold, the two may hit very different scalability barriers. For dynamic content, synchronizing updates with access becomes the limiting factor under heavy load; for static content, heavy load can be absorbed by caching frequently read pages.
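The static-content case can be sketched as a read-through cache: the first request loads a page, and subsequent requests under heavy load are served from memory. A minimal Java sketch (the loader function is a stand-in for reading the page from disk); the dynamic-content case is harder precisely because updates would have to invalidate or synchronize these entries.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache for static pages: repeated reads hit memory
// instead of re-reading or regenerating the page.
public class PageCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // fetches a page on a miss

    public PageCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String path) {
        // computeIfAbsent loads each page once; later requests are cache hits
        return cache.computeIfAbsent(path, loader);
    }

    public static void main(String[] args) {
        PageCache cache = new PageCache(p -> "<html>contents of " + p + "</html>");
        System.out.println(cache.get("/index.html")); // loaded on first request
        System.out.println(cache.get("/index.html")); // served from the cache
    }
}
```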
Developing a system's architectural strategy begins with:
(1) Identify and prioritize appropriate value contexts.
(2) Define utility curves and prioritized expected values in each context.
(3) Identify and analyze counterforces and catalysts for change in each context.
(4) Detect areas where constraints make it difficult to meet expectations.
It stands to reason that the earliest architectural decisions should produce the greatest value. Several criteria help prioritize architectural challenges, weighing trade-offs such as importance, degree, consequences, and isolation.
(1) Importance
How high a priority are the expectations affected by the challenge? If those expectations are specific to a small number of contexts, what are the relative priorities of those contexts?
(2) Degree
How strongly do the constraints affect the expected values?
(3) Consequences
Roughly how many options are available, and do they differ significantly in difficulty or effectiveness?
(4) Isolation
How isolated is the most realistic scenario?
Guidelines for architectural strategy
(1) Organization
How are subsystems and components organized? What are their composition and responsibilities? How is the system deployed on the network? What types of users and external systems exist, where are they located, and how are they connected?
(2) Operation
How do components interact? In which cases is communication synchronous, and in which asynchronous? How are the components' operations coordinated? When can components be configured or have diagnostics run on them? How are errors detected, diagnosed, and corrected?
(3) Variability
Which important features of the system can change as the deployment environment changes? For each feature, which options are supported? When can the choice be made (e.g., at compile, link, install, launch, or run time)? What are the variation points, and how are they correlated?
(4) Evolution
How is the system designed to support change while remaining stable? What specific kinds of major change are expected? What are the preferred ways to handle them?
Architectural strategy is like the rudder and keel of a sailboat, determining direction and stability. It should be a short, high-level statement of direction that all stakeholders can understand, and it should remain relatively stable throughout the life of the system.
III. The connection between model and structure
The connection between the value model and the software architecture is clear and logical, and can be expressed in the following 9 points.
(1) Software-intensive products and systems exist to provide value.
(2) Value is a scalar quantity that combines an understanding of marginal utility with the relative importance of many different goals. Goal trade-off is an extremely important issue.
(3) Value exists at multiple levels, some of which include the target system as a value provider. The value models used in these areas encompass the key drivers of software architecture.
(4) Value models at higher levels in the hierarchy can cause changes to the value models below them. This is an important basis for formulating system evolution principles.
(5) For each value group, the value model is homogeneous. Value contexts exposed to different environmental conditions have different expected values.
(6) System development sponsors have different priorities for meeting the needs of different value contexts.
(7) Architectural challenges arise from the impact of environmental factors on expectations within a context.
(8) An architectural approach attempts to maximize value by overcoming the highest priority architectural challenges first.
(9) The architectural strategy is synthesized from the highest-priority architectural approaches by summarizing common rules and policies for organization, operation, variability, and evolution.
三、 Application architecture
I. summary
The main content of the application architecture is to plan the hierarchical target application architecture: planning the target application domains, application groups, and target application components according to the business architecture, and forming the logical view and system view of the target application architecture.
From a functional perspective, it explains how each application component and the application architecture as a whole can achieve the organization's high-level IT needs, and describes the interaction between the main target application components.
II. The basic principle
1. Business Adaptability Principle
The application architecture should serve and enhance business capabilities and be able to support the organization's business or technology development strategic goals. At the same time, the application architecture should have a certain degree of flexibility and scalability to adapt to changes brought about by future business architecture development.
2. Apply the aggregation principle
Based on existing system functions, department-level applications are integrated to solve the problems of numerous application systems with scattered, overlapping functions and unclear boundaries, promoting the construction of centralized, "organization-level" application systems.
3. functional specialization principle
Carry out application planning according to the aggregation of business functions, and build application systems corresponding to the application components, so as to meet the needs of different business lines and achieve specialized development.
4. risk minimization principle
Reduce the coupling between systems, improve the independence of individual application systems, reduce interdependence between application systems, and maintain loose coupling between system levels and system groups, so as to avoid single points of failure, reduce system operation risks, and keep application systems safe and stable.
5. Asset reuse principle
Encourage and promote the refining and reuse of architectural assets to meet the requirements of rapid development and reduced development and maintenance costs. Plan organization-level shared applications to become basic services, establish a standardized system, and reuse and share them within the organization. At the same time, by reusing services or combining services, the architecture is flexible enough to meet the differentiated business needs of different business lines and support the sustainable development of the organization's business.
III. hierarchical grouping
The purpose of layering the application architecture is to achieve separation of business and technology, reduce the coupling between each layer, improve the flexibility of each layer, facilitate fault isolation, and achieve loose coupling of the architecture.
Application layering can reflect customer-centric system services and interaction patterns, and provide a customer service-oriented application architecture view.
The purpose of grouping applications is to reflect the classification and aggregation of business functions and to cohere closely related applications or functions into groups, which guides the construction of application systems, achieves high cohesion within systems and low coupling between systems, and reduces duplicated construction.
Schematic diagram of the application architecture of a city’s social insurance smart management center
Planning is divided into four categories
(1) Governance channels
1||| Mobile App
It mainly provides managers at all levels with visual views of various important topics, hot topics, business topics, and big data topics, plus important-indicator monitoring, index analysis, performance evaluation, trend analysis, command and dispatch, and other application functions, so that social security development data, the business environment index, and related reports can be reviewed at any time across the governance center's application scenarios.
2||| Desktop app
For managers in the various social insurance business fields and information departments of a certain city, it provides comprehensive topics in the social insurance field, business topics in each business field, and big data topics, realizing smart governance in application scenarios such as humanistic services, visual supervision, intelligent supervision, scientific decision-making, online command and dispatch, conference consultation, task distribution, assisted investigation and coordination, and special rectification.
3||| Large screen applications
For social insurance leaders and department managers at all levels in a city, it provides visual presentations of important indicator monitoring, decision analysis, performance evaluation, trend analysis, etc. on various important topics, hot topics, business topics, and big data topics.
(2) Governance Center System
Mainly focus on displaying three major categories of interactive themes
1||| business topic
Including employment and entrepreneurship, social insurance, labor rights protection, personnel management, talent services, personnel and professional examinations, administrative approval, telephone consultation services, social security cards, etc.
2||| comprehensive topics
Including macro decision-making, command and dispatch, integrity and risk control, fund management, career development, business environment, poverty alleviation tracking, service monitoring, public opinion monitoring, business style supervision, performance evaluation, event management, electronic licenses, standard management, benefiting people and farmers, etc.
3||| big data theme
Including social security portraits, social security portfolios, social security credit scores, social security maps, social security atlases, social security fund actuarial, social security index evaluation, social security panoramic analysis, etc.
(3) Governance supporting system
Provide four major types of applications for the smart governance center of this project
1||| Data support applications
Including data aggregation system, data governance system, and data application system.
2||| Linked service applications
Including command and dispatch management system, precision poverty alleviation management system, labor rights protection early warning management system, electronic certificate system, and user portrait system.
3||| Governance and supervision applications
Including standard management system, business style supervision system, credit management system, fund actuarial analysis system, and service efficiency evaluation system.
4||| Display interactive applications
Includes mobile governance app.
(4) Relevant system modifications
It mainly involves the transformation of the labor and employment system, social insurance system, labor relations system, personnel and talent system, public service system, risk prevention and control system, etc. related to the data collection, display, and interaction of topics related to this smart governance center.
四、 Data architecture
I. summary
Data architecture describes the structure of an organization's logical and physical data assets and related data management resources.
The main content of data architecture involves architectural planning under the entire life cycle of data, including data generation, circulation, integration, application, archiving and destruction.
Data architecture focuses on the characteristics of data being manipulated in the life cycle of data and is related to concepts in the data field such as data type, data volume, development of data technology processing, and data management and control strategies.
II. Development and evolution
1. The era of monolithic application architecture
In the early days of informatization (the 1980s), information systems were mainly standalone applications, such as the financial software and OA office software of the time. During this period, the concept of data management was still in its infancy and the data architecture was relatively simple, consisting mainly of data models and database designs, which were sufficient to meet the business needs of a system.
2. Data warehouse era
With the use of information systems, system data gradually accumulated. People then discovered that data was valuable to the organization, but fragmented systems had created a large number of information islands, which seriously affected the organization's use of data. As a result, a new subject-oriented, integrated architecture for data analysis was born: the data warehouse.
Different from traditional relational databases, the main application of data warehouse systems is OLAP, which supports complex analysis operations, focuses on decision support, and provides intuitive and easy-to-understand query results. At this stage, the data architecture not only focuses on the data model, but also on the distribution and flow of data.
3. Big Data Era
The rise of big data technology allows organizations to use their own data more flexibly and efficiently and extract more important value from the data. At the same time, driven by the needs of big data applications, various big data architectures are also constantly developing and evolving, from batch processing to stream processing, from centralized to distributed, from batch-stream integration to full real-time.
III. The basic principle
The design principle of data architecture is to follow the general principles of architecture design and have special considerations for the data architecture itself.
A reasonable data architecture design should address the following issues: rational functional positioning, scalability for future development, efficient processing and cost-effectiveness, reasonable data distribution, and data consistency.
Basic principles include
1. Data tiering principles
2. Data processing efficiency principles
3. Data consistency principle
4. Data Architecture Scalability Principles
(1) Scalability should be based on the rationality of each layer's positioning.
(2) Scalability requires consideration of the data storage model and data storage technology.
5. Serving Business Principles
IV. Architecture example
Schematic representation of the data architecture of a city’s social insurance smart management center.
Main data repositories include
(1) Source database
The source database is the source of data required by a city’s social insurance smart governance center
(2) exchange library
Data from the source database is synchronized into the exchange library using tools such as OGG, data synchronization, service calls, etc.; synchronization or mirroring is used to reduce the impact on the source database.
(3) Transition library
Data in the exchange library is extracted via OGG for Big Data (change data capture), Sqoop extraction, push, import, etc., and stored in the transition library on the Hadoop platform to improve the performance of large-batch data processing.
(4) Integrated library
Compare, convert, clean, and aggregate the data in the transition library and store it in a unified database table structure in the integrated library, which provides incremental and full data sources for each theme library.
(5) Theme library
Also called the service library; it extracts the required data from the integrated library according to the application requirements of each governance theme, supporting governance applications and visual presentation.
五、 Technology Architecture
I. summary
Technical architecture is the foundation that carries an organization's application architecture and data architecture. It is a whole composed of multiple functional modules, describing the technical systems or combinations used to implement the organization's business applications, as well as the infrastructure and environment required to support the deployment of application systems, etc.
Technical architecture requires overall consideration and unified planning. An IT technical architecture lacking an overall strategy will lead to serious waste of investment and delays in construction; the overall capability is determined by the weakest link, making IT a bottleneck for business development.
II. The basic principle
1. Maturity Control Principles
2. technical consistency principle
3. local replaceability principle
4. Talent skill coverage principle
5. Innovation driven principles
III. Architecture example
Schematic representation of the technical architecture of a city’s social insurance smart management center.
This project adopts today's advanced and mature technical architecture and routes to ensure the advancement, efficiency, reliability and scalability of a city's social insurance smart governance center.
The technical architecture is designed according to a classification and layering approach, including
(1) technical standard
Follow international and domestic standards such as J2EE, HTML5, CSS3, SQL, Shell, XML/JSON, HTTP, FTP, SM2, SM3, JavaScript, etc.
(2) basic support
1||| Relying on 5G network and Internet of Things to provide basic network support for this project;
2||| Relying on application middleware to provide application deployment support for this project;
3||| Relying on distributed cache, memory database, MPP database, and transaction database to provide basic data storage support for this project;
4||| Relying on the Hadoop platform to provide distributed storage and computing environment support for this project;
5||| Relying on search engine and rule engine components to provide technical component support for governance applications.
(3) application framework technology
It is a technology that needs to be strictly followed and adopted in application system development.
The application framework adopts a layered design, including access layer, interaction control layer, business component layer, and resource access layer.
(4) Application integration technology
Including single sign-on, service bus (ESB), process engine, message queue, and other technologies to support integration between the various application systems (a toy message-queue sketch follows this list).
(5) Data integration technology
Including ETL tools, data synchronization and replication tools, data indexing, SQL/stored procedures, MapReduce/Spark computing engines, and other technologies, providing technical support for data collection, data cleaning, data conversion, data processing, data mining, and related work for a city's social insurance smart governance center.
(6) data analysis technology
Including BI engine, report engine, GIS engine, chart component, 3D engine, multi-dimensional modeling engine, AI algorithm package, data mining algorithm package, and other big data technologies, providing technical support for the visualization of applications such as social security maps, remote command and dispatch, panoramic analysis, macro decision-making, and monitoring and supervision.
(7) Operation and maintenance technology
Including operation traces, fault warning, energy efficiency monitoring, log collection, vulnerability scanning, application monitoring, network analysis, and other technologies to support the standardized operation and maintenance of application systems.
六、 Network Architecture
I. The network is the foundation of the information technology architecture. It is not only a channel for users to request and obtain IT information resource services, but also a hub for the integration and scheduling of various resources in the information system architecture.
II. The basic principle
1. High reliability
As the hub and channel for underlying resource scheduling and service transmission, the network has an inherent requirement for high reliability.
2. High security
The security of information systems cannot rely solely on application-level safeguards; the network must also provide basic security protection. Underlying identity authentication, access control, intrusion detection, and similar capabilities need to provide important security guarantees for applications.
3. high performance
The network is not only a channel for service delivery, but also a hub for resource scheduling required to provide services. Therefore, network performance and efficiency are the guarantee for providing better service quality.
4. Manageability
It not only refers to the management of the network itself, but also refers to the rapid adjustment and control of the network based on business deployment strategies.
5. Platform and architecture
As the underlying basic resource, the network needs a broad vision to adapt to future application architectures: in the face of change, the network itself should become more flexible and expand on demand to accommodate different business scales in the future.
III. LAN architecture
A LAN refers to a computer local area network, a dedicated computer network owned by a single organization.
Features include:
① The geographical coverage is small, usually limited to a relatively independent range, such as a building or a concentrated building group (usually within 2.5km);
②High data transmission rate (generally above 10Mb/s, typically 1Gb/s, or even 10Gb/s);
③Low bit error rate (usually below 10⁻⁸), high reliability;
④Supports multiple transmission media and supports real-time applications.
In terms of network topology, there are bus, ring, star, tree, and other types.
In terms of transmission media, it includes wired LAN and wireless LAN.
A local area network usually consists of computers, switches, routers and other equipment.
A LAN not only provides Layer 2 switching functions; complex LANs can also provide Layer 3 routing functions.
Architecture type
1. Single core architecture
A single-core LAN usually uses one core Layer 2 or Layer 3 switching device as the core of the network, with user devices (such as user computers, smart devices, etc.) connected to the network through several access switching devices.
This type of LAN can be connected to the WAN through interconnection routing equipment (border routers or firewalls) connecting the core network switching equipment and the WAN to achieve business access across the LAN.
Single core network has the following characteristics:
1||| Core switching equipment usually uses Layer 2 or Layer 3 and above switches; if Layer 3 or above switches are used, VLANs can be divided, with Layer 2 data-link forwarding within a VLAN and Layer 3 routing between VLANs;
2||| The access switching device uses a Layer 2 switch, which only implements Layer 2 data link forwarding;
3||| Ethernet connections such as 100M/GE/10GE (1GE=1Gb/s) can be used between core switching equipment and access equipment.
The advantage of a single-core network is its simple structure and lower equipment investment, and it is convenient for sub-organizations that need LAN access: they can connect directly to an idle interface of the core switching device through an access switching device. Its shortcomings are that the geographical scope is limited and the sub-organizations using the LAN must be relatively concentrated; the core switching equipment is a single point of failure that can easily bring down all or part of the network; network expansion capability is limited; and when many access switching devices connect to the LAN, a high port density is required of the core switching equipment.
As an alternative, for smaller-scale networks, user equipment using this network architecture can also be directly interconnected with core switching equipment, further reducing investment costs.
2. Dual core architecture
In a dual-core architecture, there are two core switching devices, usually Layer 3 or above switches.
Ethernet connections such as 100M/GE/10GE can be used between the core switching equipment and the access equipment. When VLANs are divided within the network, access between VLANs must pass through the two core switching devices. Only the core switching equipment in the network has routing functions; the access equipment provides only Layer 2 forwarding.
The core switching devices are interconnected to provide gateway protection or load balancing, giving the network topology protection capability and reliability, and allowing hot switching of service routing and forwarding. For mutual access between the LANs of the departments connected to the network, or for access to core business servers, there is more than one path to choose from, giving higher reliability.
It is convenient for sub-organizations that need LAN access: they can connect directly to an idle interface of a core switching device through an access switching device. Equipment investment is higher than for a single-core LAN, and the port density requirements on the core switching equipment are relatively high. All business servers are connected to both core switching devices simultaneously and protected through the gateway protection protocol, providing high-speed access for user equipment.
3. ring architecture
A ring LAN uses multiple core switching devices connected into a dual RPR (Resilient Packet Ring) to build the core of the network.
Core switching equipment usually uses three-layer or above switches to provide business forwarding functions.
Each VLAN in a typical ring LAN realizes mutual access through the RPR ring. RPR has a self-healing protection function that saves optical fiber resources; it achieves 50 ms self-healing at the MAC layer and provides multi-level, reliable QoS services, a bandwidth fairness mechanism, and a congestion control mechanism. The RPR ring is usable in both directions: the network forms a ring topology over two counter-rotating optical fibers, and a node on the ring can reach another node from either direction. Each fiber can carry data and control signals simultaneously. RPR uses spatial reuse technology to make effective use of the bandwidth on the ring.
When a large-scale LAN is built with RPR, multiple rings can communicate with each other only through service interfaces; direct network-level communication between rings is not possible. Equipment investment in a ring LAN is higher than in a single-core LAN, core routing redundancy is difficult to design, and loops form easily. This network accesses the WAN through border routing devices interconnected with the switching devices on the ring.
4. Hierarchical LAN architecture
A hierarchical LAN (or multi-layer LAN) consists of core-layer switching equipment, aggregation-layer switching equipment, access-layer switching equipment, user equipment, and other components.
In the hierarchical LAN model, the core-layer equipment provides high-speed data forwarding; the aggregation-layer equipment provides sufficient interfaces toward the access layer and controls mutual access, and can switch services among the access devices under its jurisdiction (within a departmental LAN), reducing the forwarding pressure on the core switching equipment; the access-layer equipment connects user devices. A hierarchical LAN topology is easy to expand, and network faults can be isolated layer by layer, facilitating maintenance. Usually the hierarchical LAN is connected to the WAN through a border routing device to realize mutual access between LAN and WAN services.
IV. WAN architecture
A wide area network is a network that connects computer equipment distributed over a wider area than a local area network.
The WAN consists of a communication subnet and a resource subnet. Communication subnets can be constructed using public packet switching networks, satellite communication networks and wireless packet switching networks to interconnect local area networks or computer systems distributed in different areas to realize the sharing of resource subnets.
A WAN is a multi-level network, usually composed of a backbone network, distribution networks, and access networks. When the network scale is small, it may consist only of a backbone network and access networks. When planning a WAN, the functions of the three network levels must be selected according to the business scenario and network scale. For example, in planning a provincial bank's WAN, the backbone network is designed to support sharing of data, voice, image, and other information and to provide high-speed, reliable communication services for the whole banking system; the distribution network is designed to provide data exchange between the data center and branches and sub-branches, long-distance line reuse, and backbone access; and the access network is designed to provide access routing for data exchange between branches and business outlets, achieving outlet line reuse and terminal access.
Architecture type
1. Single core wide area network
A single-core WAN usually consists of one core routing device and the various LANs, as shown in Figure 4-13. The core routing device uses a Layer 3 or above switch, and access between the LANs in the network must pass through it.
There are no other routing devices between the LANs. Broadcast lines are used between each LAN and the core routing equipment, and the interconnection interfaces of the routing equipment belong to the corresponding LAN subnets. The core routing equipment can be connected to each LAN using 10M/100M/GE Ethernet interfaces. This type of network has a simple structure and saves equipment investment; access from each LAN to the core LAN and to one another is efficient; and a new departmental LAN can join the WAN easily as long as the core routing equipment has free ports. However, the core routing equipment is a single point of failure that can easily bring down the entire network, network expansion capability is poor, and a high port density is required of the core routing equipment.
2. Dual core WAN
A dual-core WAN usually consists of two core routing devices and each LAN, as shown in Figure 4-14.
The main features of the dual-core WAN model are: the core routing equipment usually uses Layer 3 or above switches and is connected to each LAN through Ethernet interfaces such as 10M/100M/GE; access between the LANs must pass through the two core routing devices, with no other routing devices in between; gateway protection or load balancing is implemented between the core routing devices; each LAN has multiple paths for accessing the core LAN and one another, giving higher reliability; hot switching at the routing level provides business continuity; and a new LAN can be added easily when interfaces are reserved on the core routing equipment. However, equipment investment is higher than for a single-core WAN, routing redundancy is difficult to design and can easily form routing loops, and a high port density is required of the core routing equipment.
3. Ring wide area network
A ring wide area network usually uses more than three core router devices to form a routing loop to connect various local area networks and realize mutual access of WAN services.
The main feature of a ring wide area network is that the core routing equipment usually uses three-layer or above switches. The core routing equipment and each local area network are usually connected through Ethernet interfaces such as 10M/100M/GE. Access between LANs within the network needs to pass through a ring formed by core routing devices. There are no other routing devices for mutual access between the LANs. Core routing devices are equipped with gateway protection or load balancing mechanisms, as well as loop control functions. Each LAN has multiple paths to choose from to access the core LAN or each other, with higher reliability. Seamless hot switching can be achieved at the routing level to ensure continuity of business access.
When the core routing equipment interface is reserved, the new department LAN can be easily accessed. However, the equipment investment is higher than that of dual-core WAN, and the routing redundancy design of core routing equipment is difficult to implement, and routing loops are easily formed. The ring topology needs to occupy more ports, and the network has higher port density requirements for core routing equipment.
4. Semi-redundant WAN
A semi-redundant WAN is formed by multiple core routing devices connecting various LANs, as shown in Figure 4-16. Among them, any core routing device has at least two or more links connected to other routing devices. A special case of a semi-redundant WAN is a fully redundant WAN if there is a link between any two core routing devices.
The main features of semi-redundant WAN are flexible structure and easy expansion. Some network core routing devices can adopt gateway protection or load balancing mechanisms or have loop control functions. The network structure is mesh-like, and there are multiple paths for each LAN to access the core LAN and each other, with high reliability. Routing selection at the routing level is more flexible. The network structure is suitable for deploying link state routing protocols such as OSPF. However, the network structure is fragmented and difficult to manage and troubleshoot.
5. Peer-to-Peer Subdomain WAN
The peer-to-peer subdomain network divides the routing equipment of the WAN into two independent subdomains, and each subdomain routing equipment is interconnected in a semi-redundant manner. The two subdomains are interconnected through one or more links, and any routing device in the peer subdomain can access the LAN, as shown in Figure 4-17.
The main features of the peer-to-peer subdomain WAN are: mutual access between the peer subdomains is carried mainly by the interconnecting links between them; route summarization or detailed route-entry matching can be used between the subdomains, so route control is flexible; and the bandwidth of inter-subdomain links should generally be higher than that of intra-subdomain links. Inter-domain routing redundancy is difficult to design, and routing loops or the advertisement of invalid routes can easily occur. The routing performance requirements on domain border routers are relatively high, and the routing protocols in the network are mainly dynamic. Peer-to-peer subdomains suit scenarios where the WAN can be clearly divided into two areas whose internal access is relatively independent.
6. hierarchical subdomain wide area network
The hierarchical subdomain WAN structure divides a large WAN's routing equipment into multiple relatively independent subdomains, with the routing equipment in each subdomain interconnected in a semi-redundant manner. There is a hierarchical relationship between the subdomains: high-level subdomains connect multiple low-level subdomains. Any routing device in a hierarchical subdomain can access the LAN, as shown in Figure 4-18.
The main feature of hierarchical subdomains is better scalability. Mutual access between low-level subdomains must pass through the high-level subdomain. Inter-domain routing redundancy is difficult to design, routing loops form easily, and there is a risk of advertising invalid routes. The link bandwidth between subdomains should be higher than the link bandwidth within a subdomain, and the routing and forwarding performance requirements on the domain border routers used for inter-domain access are relatively high. Routing protocols are mainly dynamic, such as OSPF. Interconnection with higher-level external networks is mainly handled by the high-level subdomain; interconnection with lower-level external networks is mainly handled by the low-level subdomains.
V. Mobile communication network architecture
Mobile communication networks provide strong support for the mobile Internet, especially 5G networks, which provide diversified services for individual users, vertical industries, etc.
Common 5G business applications include:
1. 5GS (5G System) and DN (Data Network) interconnection
When 5GS (5G System) provides services to mobile terminal users (User Equipment, UE), it usually requires interconnection with a DN, such as the Internet, IMS (IP Multimedia Subsystem), or a private network, to provide the required services to the UE. The UPF network element in 5GS serves as the access point to the DN for Internet, voice, AR/VR, industrial control, and driverless services. 5GS and the DN are interconnected through the N6 interface defined by 5GS, as shown in Figure 4-19.
The 5G network includes several network function entities, such as AMF/SMF/PCF/NRF/NSSF, etc. For simplicity, only the network function entities closely related to user sessions are shown in the figure.
When 5GS and the DN are interconnected over IPv4/IPv6, the UPF can be regarded as an ordinary router from the DN's perspective; conversely, from the 5GS perspective, the devices interconnected with the UPF through the N6 interface are usually routers. In other words, there is a routing relationship between 5GS and the DN, and the service flows of UEs accessing the DN are forwarded between them through bidirectional routing configuration. Within the 5G network, traffic flowing from UE to DN is called uplink (UL) traffic, and traffic flowing from DN to UE is called downlink (DL) traffic. UL traffic is forwarded to the DN through routes configured on the UPF; DL traffic is forwarded to the UPF through routes configured on the routers adjacent to the UPF.
In addition, from the perspective of how UE accesses the DN through 5GS, there are two modes:
(1) Transparent mode
In transparent mode, 5GS connects directly to the operator's specific IP network through the UPF's N6 interface, and then connects to the DN (i.e., an external IP network such as the Internet) through a firewall or proxy server. The UE is allocated an IP address from the address space planned in the operator's network. When the UE initiates a session establishment request to 5GS, 5GS usually does not trigger an authentication process toward an external DN-AAA server, as shown in Figure 4-20.
In this mode, 5GS provides the UE with at least a basic ISP service; 5GS itself only needs to provide basic tunnel and QoS flow services. When a UE accesses an intranet, UE-level configuration is completed independently between the UE and the intranet, which is transparent to 5GS.
(2) Non-transparent mode
In non-transparent mode, 5GS can access the Intranet/ISP directly, or through another IP network (such as the Internet). For example, if 5GS accesses the Intranet/ISP through the Internet, a dedicated tunnel usually needs to be established between the UPF and the Intranet/ISP to carry the UE's Intranet/ISP traffic. The UE is assigned an IP address belonging to the Intranet/ISP address space; this address is used by the UPF and the Intranet/ISP to forward the UE's traffic, as shown in Figure 4-21.
To sum up, a UE can access the Intranet/ISP service servers through 5GS over any intermediate network, even an insecure one such as the Internet, because data communication can be protected by a security protocol between the UPF and the Intranet/ISP. The security protocol used is negotiated between the mobile operator and the Intranet/ISP provider.
As part of UE session establishment, the SMF in 5GS usually initiates authentication of the UE toward an external DN-AAA server (such as a RADIUS or Diameter server). After the UE is successfully authenticated, session establishment can complete, and the UE can then access Intranet/ISP services.
2. 5G network edge computing
5G networks move away from the previous device- and business-centered orientation and advocate a user-centered concept. While providing services, 5G networks pay more attention to the user's quality of experience (QoE). Providing edge computing capability in the 5G network is one of the important measures for empowering vertical industries and improving user QoE.
The Mobile Edge Computing (MEC) architecture of the 5G network is shown in Figure 4-22. It supports deploying 5G UPF network elements at the edge of the mobile network, close to the end-user UE, combined with a Mobile Edge Platform (MEP) deployed at the network edge, to provide vertical industries with nearby traffic-offloading services characterized by time sensitivity and high bandwidth. This gives users an excellent service experience on the one hand, and reduces the back-end processing pressure on the mobile network on the other.
The operator's own applications or third-party applications (Application Function, AF) trigger the 5G network to dynamically generate local offloading policies for edge applications through the capability exposure network element NEF (Network Exposure Function) provided by 5GS; the PCF (Policy Control Function) then configures these policies to the relevant SMF. Based on end-user location information, or location-change information after the user moves, the SMF dynamically inserts or removes UPFs (i.e., UPFs deployed in the mobile edge cloud) in the user session and dynamically configures offloading rules on them, so that users access the required services with optimal results.
In addition, from the perspective of business continuity, the 5G network offers SSC mode 1 (the IP anchor point of the user session remains unchanged while the user moves), SSC mode 2 (the network releases the user's existing session while the user moves and immediately triggers establishment of a new session), and SSC mode 3 (a new session is established before the user's existing session is released) for the application service provider (ASP) or operator to choose from.
VI. software defined network
See Section 5 of Chapter 2, Section 2.1.2 of this book.
七、 security architecture
I. security threats
Currently, organizations host more and more business on hybrid clouds, making it more difficult to protect user data and business. The complex environment composed of local infrastructure and multiple public and private clouds places high requirements on hybrid cloud security. This popularization and application has two effects:
① The business operations of all walks of life are almost entirely dependent on computers, networks and cloud storage. Various important data such as government documents, archives, bank accounts, corporate business and personal information will all rely on the storage and transmission of computers and networks;
② As people understand computers more comprehensively, computer technology is increasingly misused: attackers with ever more sophisticated skills use various means to steal or attack information resources.
At present, the threats an information system may face can be summarized as follows:
1. For information systems, threats can target the physical environment, communication links, network systems, operating systems, application systems, and management systems.
2. Physical security threats refer to threats to the equipment used in the system, such as natural disasters, power failure, operating system boot failure or loss of database information, equipment being stolen/destroyed resulting in data loss or information leakage;
3. Communication link security threats refer to installing eavesdropping devices on transmission lines or interfering with communication links;
4. Network security threats refer to the fact that due to the openness and internationalization of the Internet, people can easily steal Internet information through technical means, posing a serious security threat to the network;
5. Operating system security threats refer to threats implanted in the software or hardware chips in the system platform, such as "Trojan horses", "trap doors", and universal passwords in BIOS;
6. Application system security threats refer to threats to the security of network services or user business systems, and are also threatened by "Trojan horses" and "trap doors";
7. Management system security threats refer to man-made security vulnerabilities caused by negligence in personnel management, such as stealing computer information through artificial copying, taking pictures, transcribing and other means.
Common security threats include:
(1) information leakage
Information is leaked or disclosed to an unauthorized entity.
(2) Destroy the integrity of information
Data is lost due to unauthorized addition, deletion, modification or destruction.
(3) Denial of service
Legitimate access to information or other resources is unconditionally blocked.
(4) Illegal access (unauthorized access)
A resource is used by an unauthorized person or in an unauthorized manner.
(5) tapping
Use all possible legal or illegal means to steal information resources and sensitive information in the system. For example, monitoring signals transmitted in communication lines, or using electromagnetic leakage generated by communication equipment during operation to intercept useful information.
(6) business flow analysis
Through long-term monitoring of the system, statistical analysis is applied to parameters such as communication frequency, information flow direction, and total communication volume to discover valuable information and patterns.
(7) Counterfeit
By deceiving a communication system (or user), an illegal user masquerades as a legitimate user, or a user with low privileges masquerades as one with high privileges. Hackers mostly use impersonation attacks.
(8) Bypass control
An attacker takes advantage of a system's security flaws or vulnerabilities to gain unauthorized rights or privileges. For example, attackers use various attack methods to discover some system "features" that should be kept secret but are exposed. Using these "features", attackers can bypass defenders and penetrate into the system.
(9) License infringement
A person who is authorized to use a system or resource for a certain purpose uses this permission for other unauthorized purposes, also known as an "insider attack."
(10) Trojan horse
The software contains an imperceptible or seemingly harmless program segment that, when executed, compromises the user's security. Such an application is called a Trojan horse.
(11) trap door
A "chassis" is set up in a system or component to allow violations of security policies when specific input data is provided.
(12) repudiation
This is an attack from a user, for example, denying a message one has sent, or forging a message claimed to have been received from the other party.
(13) replay
A copy of legitimate communication data is intercepted and later retransmitted for illegal purposes.
(14) computer virus
A computer virus is a program that can infect other programs and cause harm during the operation of a computer system. A virus usually has two capabilities: one is to "infect" other programs; the other is to cause damage or implant an attack.
(15) personnel malfeasance
An authorized person discloses information to an unauthorized person for money or profit, or due to carelessness.
(16) media scrap
Information is obtained from discarded disks or printed storage media.
(17) physical intrusion
Intruders gain access to a system by bypassing physical controls.
(18) steal
Important security items such as tokens or ID cards are stolen.
(19) business deception
A fake system or system component that deceives legitimate users or systems into voluntarily giving up sensitive information.
II. Definition and Scope
Security architecture is a subdivision that focuses on information system security at the architectural level.
In information systems, security is usually embodied in three lines of defense: the system security architecture, the security technology architecture, and the audit architecture.
1. System security architecture
System security architecture refers to the main components that build the security quality attributes of information systems and the relationships between them.
The goal of system security architecture is to build security into the system at the source, rather than relying on external defense systems.
2. Security technology architecture
Security technology architecture refers to the main components of building a security technology system and the relationships between them.
The task of the security technology architecture is to build a general security technology infrastructure, including security infrastructure, security tools and technologies, security components and support systems, etc., to systematically enhance the security defense capabilities of each part.
3. Audit architecture
The audit architecture refers to an independent audit department and the risk discovery capabilities it provides. The scope of audit covers all risks, including security risks.
When designing a system, the security threats the system may encounter usually need to be identified. The fundamental goal of security architecture design is to evaluate those threats reasonably, implement corresponding control measures, and propose effective and reasonable security technologies to form a security solution that improves the security of the information system. In practice, security architecture design can be approached from the perspective of security technologies, such as encryption and decryption and network security technology.
III. Overall architecture design
一、 The framework of the information security assurance system should include three parts: technical system, organizational system and management system. In other words, people, management and technical means are the three major elements of information security architecture design, and building a dynamic information and network security assurance system framework is the guarantee for achieving system security.
二、 In response to network security protection issues, various countries have proposed multiple network security system models and architectures, such as PDRR (Protection/Detection/Reaction/Recovery, Protection/Detection/Response/Recovery) model, P2DR model (Policy/Protection/Detection/Response, Security Policy/Protection/Detection/Response).
三、 WPDRRC model
WPDRRC (Warning/Protect/Detect/React/Restore/Counterattack) is an information system security assurance system construction model proposed by China's information security expert group.
WPDRRC is based on the PDRR information security system model and adds early warning and counterattack functions.
In the PDRR model, the concept of security expands from information security to information assurance, whose connotation goes beyond traditional information security and confidentiality: it is the organic combination of Protect, Detect, React, and Restore. The PDRR model takes information security protection as the basis, treats protection as an active process, uses detection to discover and promptly correct security vulnerabilities, and uses emergency response measures to deal with intrusions. After a system is compromised, corresponding measures must be taken to restore it to its normal state; only then can information security be fully guaranteed. The model emphasizes automatic fault recovery capability.
The six links are: early warning (W), protection (P), detection (D), response (R), recovery (R), and counterattack (C). They are strongly time-ordered and dynamic, and together reflect the early warning, protection, detection, response, recovery, and counterattack capabilities of an information system security system.
(1) Early Warning(W)
It mainly refers to using the simulated-attack technology of a remote security assessment system to check for exploitable weaknesses in the system, collect and test the security risks of the network and information, report them in an intuitive way, and provide solution suggestions.
(2) Protect(P)
Protection usually adopts mature information security technologies and methods to achieve network and information security.
The main contents include encryption mechanisms, digital signature mechanisms, access control mechanisms, authentication mechanisms, information hiding, and firewall technology, etc. (a short example follows this list).
(3) Detection(D)
Detection means monitoring networks and systems to discover new threats and weaknesses and to enforce security policies.
In this process, technologies such as intrusion detection and malicious code filtering are used to form a dynamic detection, reporting, and coordination mechanism that improves the timeliness of detection.
The main contents include intrusion detection, system vulnerability detection, data integrity detection and attack detection, etc.
(4) Response(R)
Response means that after security vulnerabilities and security events are detected, correct responses must be made in a timely manner to bring the system back to a safe state. This requires corresponding alarm, tracking, and handling systems, including blocking, isolation, reporting, and other capabilities.
The main contents include emergency strategies, emergency mechanisms, emergency means, intrusion process analysis and security status assessment, etc.
(5) Recovery(R)
It refers to using necessary technical means to restore the system to normal in the shortest possible time after the current network, data, and services are attacked by hackers and damaged or affected.
The main contents include fault tolerance, redundancy, backup, replacement, repair and recovery, etc.
(6) Counterattack(C)
It refers to using all possible high-tech means to detect and extract clues and criminal evidence of computer criminals to form strong evidence-gathering capabilities and legal attack methods.
The three major elements include: people, strategy and technology. People are the core, strategy is the bridge, and technology is the guarantee.
After years of development, network security system models have evolved into models such as PDR, PPDR, PDRR, MPDRR, and WPDRRC, whose information security prevention functions are increasingly complete.
四、 Architecture design
The security requirements of information systems cannot be solved by any single security technology. To design an information security architecture, an appropriate security architecture model should be selected.
Information system security design focuses on two aspects:
1. System security system
The security assurance system is composed of three dimensions: security services, protocol layers, and system units, and each layer covers security management content.
The design work of the system security assurance system mainly considers the following points:
(1) Determination of security zone strategy
Based on the division of security zones, the competent authorities should formulate targeted security policies, such as regular audit and assessment, installation of intrusion detection systems, and unified authorization and authentication.
(2) Unified configuration and management of antivirus systems
Competent authorities should establish an overall defense strategy to achieve unified configuration and management. Network anti-virus strategies should meet the requirements of comprehensiveness, ease of use, real-time performance and scalability.
(3) Network and Information Security Management
In network security, in addition to adopting some technical measures, it is also necessary to strengthen network and information security management and formulate relevant rules and regulations. In relevant management, any safety guarantee measures must ultimately be implemented in specific management rules and regulations and specific managerial responsibilities, and be realized through the work of managers.
2. Information security architecture
Through a comprehensive understanding of information system applications, the design work of the security system architecture is carried out in accordance with security risks, requirements analysis results, security policies, and network and information security objectives.
Specifically in the safety control system, analysis and design work can be carried out from five aspects
(1) physical security
Ensuring the physical security of various equipment in computer information systems is a prerequisite for ensuring the security of the entire network system.
Physical security means protecting computer network equipment, facilities, and other media from damage caused by environmental accidents such as earthquakes, floods, and fires, as well as from human operational errors and various computer crimes.
Physical security mainly includes: environmental security, equipment security, media security, etc.
(2) system security
System security mainly refers to the security requirements for each component in the information system.
System security is the basis for the overall security of the system.
It mainly includes network structure security, operating system security and application system security.
(3) cyber security
Cybersecurity is key to the overall security solution.
It mainly includes access control, communication confidentiality, intrusion detection, network security scanning and anti-virus.
(4) Application security
Application security mainly refers to the security issues caused by shared resources and information storage operations when multiple users use network systems.
It mainly includes two aspects: resource sharing and information storage.
(5) Security management
Mainly reflected in three aspects: formulating a sound security management system, building a security management platform, and enhancing personnel's security awareness.
五、 Design Points
I. Key points of system security design
Network structure security focuses on whether the network topology is reasonable, whether lines and routing are redundant, and on preventing single points of failure.
Operating system security focuses on two aspects: ① preventive measures, such as using a more secure network operating system with the necessary security configuration, closing rarely used applications that carry security risks, restricting usage through permissions, and strengthening passwords; ② deploying an operating system security scanning system to scan the OS, discover vulnerabilities, and upgrade promptly.
In terms of application system security, the focus is on application servers: avoid opening infrequently used protocols and ports. For example, on file servers and e-mail servers, services such as HTTP, FTP, and Telnet can be turned off. Login identity authentication can also be strengthened to ensure that users are legitimate.
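To make the scan-and-harden idea above concrete, here is a minimal Python sketch (the host and port list are illustrative assumptions, not from the text) that checks whether legacy plaintext services such as FTP and Telnet are still listening on a host:

```python
# Hedged sketch: check which common TCP service ports accept connections,
# as a crude stand-in for the security scanning described above.
import socket

COMMON_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP", 443: "HTTPS"}

def scan_host(host: str, timeout: float = 1.0) -> dict:
    """Return {port: service} for ports that accept a TCP connection."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port is open
                open_ports[port] = service
    return open_ports

if __name__ == "__main__":
    # Flag legacy plaintext services (FTP/Telnet) that the text suggests closing.
    for port, service in scan_host("127.0.0.1").items():
        print(f"port {port} ({service}) is open")
```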
II. Network security design essentials
Isolation and access control require a strict control system; a series of management measures such as "User Authorization Implementation Rules", "Password and Account Management Specifications", and "Permission Management Regulations" can be formulated.
Equipping a firewall is the most basic, economical and effective security measure in network security. Isolation and access control between internal and external networks or different trust domains in the internal network are achieved through strict security policies of the firewall. The firewall can implement one-way or two-way control, and implement finer-grained access control for some high-level protocols.
Intrusion detection monitors and records, in real time, all traffic entering and leaving a network segment, based on the signatures of known and emerging attack methods, and responds (blocking, alarming, sending e-mail) according to established policies, thereby guarding against attacks and crimes targeting the network. An intrusion detection system generally consists of a console and detectors (network engines): the console configures and manages all detectors, while each network engine monitors access behavior in and out of the network and executes actions as instructed by the console.
Virus protection is a necessary means of network security, because in a network environment computer viruses pose immeasurable threats and destructive power. The operating systems used in network systems (such as Windows) are prone to virus infection, so computer virus prevention is one of the important aspects of network security construction. Anti-virus technology covers three types: virus prevention, virus detection, and virus removal.
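As a toy illustration of the signature-matching idea shared by intrusion detection and virus detection above, the following Python sketch compares payloads against a small set of invented attack signatures and reacts per policy; real IDS and anti-virus engines are far more sophisticated:

```python
# Hedged sketch of signature-based detection: compare traffic payloads
# against known attack signatures and trigger a policy response.
# Signatures and actions are invented placeholders, not from a real product.
SIGNATURES = {
    b"' OR '1'='1": "sql-injection",
    b"../../etc/passwd": "path-traversal",
}

def inspect(payload: bytes) -> list[str]:
    """Return the names of all known attack signatures found in a payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

def respond(alerts: list[str]) -> None:
    # Per the text: block, alarm, or send e-mail according to policy.
    for name in alerts:
        print(f"ALERT: {name} detected -> blocking and notifying")

respond(inspect(b"GET /index.php?id=' OR '1'='1"))
```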
III. Application security design essentials
Resource sharing must strictly control the use of network shared resources by internal employees. Generally, shared directories should not be easily opened in internal subnets, otherwise important information may be leaked due to negligence when exchanging information among employees. For users who need to frequently exchange information, a necessary password authentication mechanism must be added when sharing, that is, only through password authentication can access to data be allowed.
Information storage concerns user hosts that hold confidential information. Such users should open as few uncommon network services as possible during use. Databases on data servers should be securely backed up, for example stored off-site through a network backup system.
IV. Key points of safety management design
Establishing and improving a security management system will be an important guarantee for the realization of network security. You can formulate security operation procedures, reward and punishment systems for security incidents according to your actual situation, and appoint security managers to be fully responsible for supervision and guidance.
Building a security management platform reduces many risks caused by unintentional human error and provides technical protection, for example by forming a security management subnet and installing centralized, unified security management software, network equipment management systems, and unified management software for network security equipment, so that the entire network is managed for security through the platform.
Security awareness training should be conducted regularly for employees to comprehensively improve their overall security awareness.
六、 Architecture example
The safety control system here refers to a system that provides a highly reliable means of safety protection: it avoids unsafe states of related equipment to the greatest extent, prevents serious accidents or minimizes losses after an accident, and protects production equipment and, above all, personal safety.
The architecture adopts a traditional hierarchical structure, divided into a data layer, a function layer, and a presentation layer. The data layer manages organizational data in a unified way and stores, isolates, and protects it according to the data's security characteristics. The function layer carries the core security prevention functions, including availability monitoring, service support, and security monitoring: availability monitoring covers network security, system security, and application security; service support includes security management design and realizes most operation and maintenance functions within a secure management environment; security monitoring handles any unsafe phenomena discovered in the system, covering threat tracing, security-domain audit and assessment, authorization, authentication, and risk analysis and assessment. The presentation layer implements the user-facing functions, including the use, maintenance, and decision support of the security architecture.
IV. Network security architecture design
i. The purpose of establishing an information system security system is to combine universal security principles with the reality of information systems to form a security architecture that meets the security needs of information systems. The network security architecture is one of the cores of the information system security system.
ii. System security system
1. OSI security architecture
OSI (Open System Interconnection Reference Model, OSI/RM) is the open communication systems interconnection model formulated by the International Organization for Standardization, with its security architecture defined in ISO 7498-2. The national standard GB/T 9387.2, "Information Processing Systems: Open Systems Interconnection Basic Reference Model, Part 2: Security Architecture," is equivalent to ISO 7498-2.
The purpose of OSI is to ensure the secure exchange of information over long distances between open system processes. These standards establish some guiding principles and constraints within the framework of a reference model, thereby providing a consistent approach to solving security issues in open interconnected systems.
OSI defines a 7-layer protocol, in which each layer except layer 5 (session layer) can provide corresponding security services.
It is most suitable to configure security services on the physical layer, network layer, transport layer and application layer. It is not suitable to configure security services on other layers.
The five types of security services of the OSI open system interconnection security system include authentication, access control, data confidentiality, data integrity and non-repudiation.
OSI defines a layered multi-point security technology architecture, also known as a defense-in-depth security technology architecture, which distributes defense capabilities to the entire information system in the following three ways.
(1) Multi-point technical defense
1||| Network and infrastructure:
To ensure availability, LANs and WANs need to be protected against attacks such as denial-of-service attacks. To ensure confidentiality and integrity, the information transmitted over these networks and the characteristics of the traffic need to be protected from unintentional disclosure.
2||| boundary:
To protect against active network attacks, the perimeter needs to provide stronger perimeter defenses such as traffic filtering and control and intrusion detection.
3||| Computing environment:
To protect against internal, closely spaced distributed attacks, hosts and workstations need to provide adequate access controls.
(2) Layered technical defense
To reduce the likelihood of successful attacks and limit their impact, each mechanism should present a unique barrier and include both protection and detection measures.
For example, using nested firewalls along with intrusion detection at both the external and internal boundaries is an example of layered technology defense.
(3) supporting infrastructure
1||| public key infrastructure
Provides a common foundation for securely creating, distributing, and managing public key certificates and traditional symmetric keys, enabling them to provide secure services to networks, perimeters, and computing environments. These services provide reliable verification of the integrity of senders and receivers and prevent unauthorized disclosure and alteration of information. The public key infrastructure must support controlled interoperability and be consistent with the security policies established by each user community.
2||| Detection and response infrastructure
Ability to quickly detect and respond to intrusions. It also provides a "summary" function that makes it easy to observe an event in conjunction with other related events. Additionally, it allows analysts to identify potential behavioral patterns or emerging trends.
The security of information systems not only relies on technology, but also requires non-technical defense methods. An acceptable level of information assurance relies on a combination of people, management, technology and processes.
2. Certification framework
The basic purpose of authentication is to prevent other entities from occupying and independently operating the identity of the authenticated entity.
Authentication provides assurance that an entity claims its identity and is meaningful only in the context of the relationship between the subject and the verifier.
There are two important relational contexts for identification:
①The entity is represented by the applicant, and there is a specific communication relationship between the applicant and the verifier (such as entity identification);
②The entity provides the source of data items to the verifier.
Identification is mainly based on the following five methods:
1||| Something known, such as a secret password.
2||| Something possessed, such as an IC card or token.
3||| Immutable characteristics, such as biometric features.
4||| Trust in authentication established by a reliable third party (recursion).
5||| Environment, such as the host address.
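As an illustration of method ① ("something known"), here is a minimal Python sketch of password verification using a salted key-derivation function from the standard library; the parameters shown are illustrative assumptions:

```python
# Hedged sketch: verify a secret password without storing it in the clear,
# using salted PBKDF2 from the standard library. Iteration count is illustrative.
import hashlib, os, hmac

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store only (salt, derived_key), never the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)  # constant-time comparison

salt, key = enroll("s3cret")
assert verify("s3cret", salt, key) and not verify("wrong", salt, key)
```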
Authentication information refers to the information generated, used and exchanged from the applicant's request for authentication to the end of the authentication process.
The three types of authentication information are request authentication information, exchange authentication information, and verification authentication information.
In some cases, the applicant needs to interact with a trusted third party in order to generate exchange authentication information; likewise, to verify exchanged authentication information, the verifier may also need to interact with a trusted third party. In this case, the trusted third party holds the verification authentication information (AI) of the relevant entities, and may also be used to relay exchange authentication information. The entity may additionally need to hold the authentication information used to authenticate the trusted third party itself.
The authentication service is divided into the following stages:
1||| Installation phase
Define request authentication information and verification authentication information.
2||| Modify authentication information stage
An entity or administrator applies for changes to request authentication information and verification authentication information (such as changing a password).
3||| Distribution stage
For verifying exchanged authentication information, verification authentication information is distributed to the entities (such as applicants or verifiers) that need it.
4||| acquisition phase
The applicant or verifier obtains the information needed to generate specific exchange authentication information for an authentication instance; this can be obtained by interacting with a trusted third party or by exchanging information between the authenticating entities.
5||| transmission phase
Transmit and exchange authentication information between the applicant and the verifier.
6||| Verification phase
The exchange authentication information is checked against the verification authentication information.
7||| deactivation phase
A state is established in which a previously authenticatable entity temporarily cannot be authenticated.
8||| reactivation phase
The state established during the deactivation phase will be terminated.
9||| Uninstallation phase
The entity is removed from the entity collection.
3. access control framework
Access Control is the process of determining which uses of resources in an open system environment are permitted and, where appropriate, preventing unauthorized access.
In the context of access control, access may be to a system (that is, to an entity that is the communicating part of a system) or within a system.
Figure 4-25 and Figure 4-26 illustrate the basic functions of access control.
ACI (Access Control Information) is any information used for access control purposes, including contextual information. ADI (Access Control Decision Information) is part (or all) of the ACI available to ADF when making a specific access control decision.
ADF (Access Control Decision Function) is a specific function that makes access control decisions by using access control policy rules on the access request, the ADI, and the context of the access request. AEF (Access Control Enforcement Function) ensures that only access allowed to the target is performed by the initiator.
Involved in access control are the initiator, AEF, ADF and target. Initiators represent people and computer-based entities that access or attempt to access a target. The target represents the computer or communications-based entity that is attempted to be accessed or is accessed by the initiator. For example, the target may be an OSI entity, file, or system. An access request represents the operations and operands that form part of the access attempt.
When the initiator requests special access to the target, the AEF asks the ADF for a decision. To make the decision, the ADF is provided with the access request (as part of the decision request) and the following kinds of access control decision information (ADI).
(1) Initiator ADI (ADI is derived from the ACI bound to the initiator);
(2) Target ADI (ADI is derived from the ACI bound to the target);
(3) Access request ADI (ADI is derived from the ACI bound to the access request).
Other inputs to ADF are access control policy rules (from the security domain authority of ADF) and the necessary contextual information used to interpret the ADI or policy. Contextual information includes the originator's location, access time, or special communication paths in use. Based on these inputs, and possibly ADI information retained from previous decisions, the ADF can make a decision that allows or disallows the initiator's attempted access to the target. The decision is passed to the AEF, which then allows the access request to be passed to the target or takes other appropriate action.
In many cases, successive access requests from an initiator to a target are related. A typical application example: after opening a connection with a peer target, the application process attempts several accesses using the same (retained) ADI. For some access requests subsequently communicated over the connection, additional ADI may need to be provided to the ADF so that those requests can be allowed. In other cases, the security policy may impose restrictions on certain related access requests between one or more initiators and one or more targets; the ADF may then adjudicate a particular access request using ADI retained from previous decisions involving multiple initiators and targets.
If allowed by the AEF, the access request involves only a single interaction between the initiator and the target. Although some access requests between the initiator and target are completely unrelated to other access requests, often the two entities enter into a related set of access requests, such as the challenge-response pattern. In this case, the entity changes the initiator and target roles simultaneously or alternately as needed, and the access control function can be performed for each access request by separate AEF components, ADF components, and access control policies.
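The decision flow just described can be sketched in a few lines of Python. The rule below (a role check plus working hours as contextual information) is an invented example, not from the standard:

```python
# Hedged sketch of the access control components above: the AEF intercepts
# an access request and forwards it, with initiator/target/context ADI,
# to the ADF, which applies policy rules and returns allow/deny.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    initiator_adi: dict   # e.g. {"role": "analyst"}, derived from initiator ACI
    target_adi: dict      # e.g. {"classification": "internal"}, from target ACI
    operation: str        # the operation forming the access attempt
    context: dict         # contextual information, e.g. location, access time

def adf_decide(req: AccessRequest) -> bool:
    """Access Control Decision Function: policy rules + ADI + context -> decision."""
    if req.target_adi.get("classification") == "internal":
        return (req.initiator_adi.get("role") in {"analyst", "admin"}
                and 9 <= req.context.get("hour", -1) < 18)
    return False

def aef_enforce(req: AccessRequest) -> str:
    """Access Control Enforcement Function: only ADF-approved access reaches the target."""
    return "access granted" if adf_decide(req) else "access denied"

print(aef_enforce(AccessRequest({"role": "analyst"},
                                {"classification": "internal"},
                                "read", {"hour": 10})))
```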
4. confidentiality framework
The purpose of confidentiality (Confidentiality) services is to ensure that information is available only to authorized parties. Because information is represented by data, and operations on data may change related data (for example, file operations may change directories or available storage areas), information can be derived from data in many ways: by understanding the data's meaning (such as its value); by using data-related attributes (such as its existence, creation date, size, or date of last update); by studying the data's context, that is, through other data entities related to it; and by observing the dynamic changes of data representations.
Information is protected either by restricting the data to authorized parties or by representing the data in a particular way, such that the data is accessible only to those who possess certain key information. Effective confidentiality protection requires that the necessary control information (such as keys and RCI) itself be protected; the mechanism for this may differ from the one used to protect the data (for example, keys may be protected by physical means).
The two concepts of protected environment and overlapping protected environment are used in the confidentiality framework. Data in a protected environment can be protected through the use of a specific security mechanism (or mechanisms). All data in a protected environment is protected in a similar manner. When two or more environments overlap, the data in the overlap can be protected multiple times. It can be inferred that continuous protection of data moved from one environment to another necessarily involves overlapping protection environments.
The confidentiality of data can depend on the medium on which it resides and is transmitted, so the confidentiality of stored data is guaranteed by using mechanisms that hide data semantics (such as encryption) or shard the data. The confidentiality of data during transmission is ensured by mechanisms that prohibit access, hide data semantics, or disperse data (such as frequency hopping, etc.). These mechanism types can be used individually or in combination.
Mechanism type
(1) Provide confidentiality by denying access
(2) Provide confidentiality through encryption
Encryption-based confidentiality mechanisms are divided into those based on symmetric encryption and those based on asymmetric encryption.
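A minimal sketch of "confidentiality through encryption" with a symmetric mechanism, using the third-party Python `cryptography` package; as the text notes, the key itself is control information that must be protected separately:

```python
# Hedged sketch of symmetric-encryption confidentiality using the
# third-party `cryptography` package (pip install cryptography).
# Protecting the key (e.g. by physical means) is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # control information that must be protected
cipher = Fernet(key)

token = cipher.encrypt(b"payroll data")   # data semantics hidden from non-holders
print(cipher.decrypt(token))              # only holders of the key recover the data
```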
5. integrity framework
The purpose of the integrity (Integrity) framework is to protect the integrity of data and the integrity of data-related attributes that may be compromised in different ways by preventing threats or detecting threats. Integrity refers to the characteristic that data is not altered or destroyed in unauthorized ways.
Integrity services are classified in several ways:
1||| According to the classification of violations to be prevented, the violation operations are divided into unauthorized data modification, unauthorized data creation, unauthorized data deletion, unauthorized data insertion and unauthorized data replay.
2||| The protection methods provided are divided into preventing integrity damage and detecting integrity damage.
3||| According to whether it supports the recovery mechanism, it is divided into those with recovery mechanism and those without recovery mechanism.
Since the ability to protect data is related to the media being used, data integrity protection mechanisms are different for different media and can be summarized as the following two situations.
1||| Mechanism to block access to media. Including physically isolated uninterrupted channels, routing control, and access control.
2||| A mechanism for detecting unauthorized modifications to data or sequences of data items. Unauthorized modifications include unauthorized data creation, data deletion, and data replay. Corresponding integrity mechanisms include sealing, digital signatures, data duplication (as a means to combat other types of breaches), digital fingerprints combined with cryptographic transformations, and message sequence numbers.
According to the intensity of protection, integrity mechanisms can be divided into:
1||| No protection;
2||| Detection of modifications and creations;
3||| Detection of modifications, creations, deletions and duplications;
4||| Detection of modifications and creations with recovery function;
5||| Detection and recovery of modifications, creations, deletions and duplications.
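The "sealing" mechanism above, combined with message sequence numbers, can be sketched as follows in Python; the key and values are illustrative:

```python
# Hedged sketch of a sealing mechanism: a keyed MAC plus a sequence number
# detects unauthorized modification and replay of transmitted data items.
import hmac, hashlib

KEY = b"shared-secret"  # must itself be protected, like the keys discussed earlier

def seal(seq: int, data: bytes) -> bytes:
    return hmac.new(KEY, seq.to_bytes(8, "big") + data, hashlib.sha256).digest()

def check(seq: int, data: bytes, tag: bytes, last_seen_seq: int) -> bool:
    ok = hmac.compare_digest(seal(seq, data), tag)
    return ok and seq > last_seen_seq   # reject replayed or reordered items

tag = seal(1, b"transfer 100")
assert check(1, b"transfer 100", tag, last_seen_seq=0)       # accepted
assert not check(1, b"transfer 999", tag, last_seen_seq=0)   # modification detected
assert not check(1, b"transfer 100", tag, last_seen_seq=1)   # replay detected
```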
6. non-repudiation framework
Non-repudiation (Non-repudiation) services include the generation, verification, and recording of evidence, as well as the subsequent retrieval and re-verification of evidence when resolving disputes. The purpose of the non-repudiation services described in the framework is to provide evidence about specific events or actions; entities other than those involved in the event or action can also request the service. Examples of behaviors that can be protected by the non-repudiation service include sending X.400 messages, inserting records in a database, and requesting remote operations.
When non-repudiation applies to message content, proof of origin requires confirming the identity of the data originator and the data's integrity, and proof of delivery requires confirming the recipient's identity and data integrity. In some cases, evidence involving contextual information (such as date, time, and the originator's or recipient's location) may also be required. The non-repudiation service provides the following facilities for use when denial is attempted: evidence generation, evidence recording, verification of generated evidence, and retrieval and re-verification of evidence. Disputes may be resolved directly between the parties by examining the evidence, or through an arbitrator who evaluates the evidence and determines whether the disputed action or event occurred.
Non-repudiation consists of 4 independent stages, namely:
1||| Evidence generation
In this phase, the evidence generation requester requests the evidence generator to generate evidence for an event or action. The entity involved in an event or behavior is called an evidence entity, and its involvement relationship is established by evidence. Depending on the type of non-repudiation service, evidence can be generated by the evidence entity, together with the services of a trusted third party, or by a trusted third party alone.
2||| Evidence transmission, storage and recovery
At this stage, evidence is transferred between entities, or stored to and retrieved from storage.
3||| Evidence verification
At this stage, the evidence is verified by the evidence verifier at the request of the evidence user. The purpose of this stage is to convince the evidence user that the evidence provided is indeed sufficient in the event of a dispute. Trusted third-party services can also participate to provide information that verifies this evidence.
4||| Dispute resolution
During the dispute resolution stage, the arbitrator has the responsibility to resolve the dispute between the parties.
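A minimal sketch of evidence generation and verification for proof of origin, using an Ed25519 digital signature from the third-party Python `cryptography` package; a real deployment would also involve trusted third parties and evidence recording, as described above:

```python
# Hedged sketch: a digital signature as evidence of origin. The message
# is an invented placeholder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

originator_key = Ed25519PrivateKey.generate()
message = b"insert record #42"

evidence = originator_key.sign(message)          # evidence generation stage
public_key = originator_key.public_key()

try:                                             # evidence verification stage
    public_key.verify(evidence, message)
    print("origin and integrity confirmed")
except InvalidSignature:
    print("evidence rejected")
```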
V. Database system security design
i. Database integrity refers to the correctness and consistency of the data in the database. Database integrity is guaranteed by various integrity constraints, so it can be said that database integrity design is the design of database integrity constraints. Database integrity constraints can be implemented through a database management system (DBMS) or application program. The integrity constraints based on DBMS are stored in the database as part of the schema.
ii. Database Integrity Design Principles
1. Determine the system level and method of implementation based on the type of database integrity constraints, and consider the impact on system performance in advance. In general, static constraints should be included in the database schema as much as possible, while dynamic constraints are implemented by the application.
2. Entity integrity constraints and referential integrity constraints are the most important integrity constraints of relational databases, and they should be applied as much as possible without affecting the key performance of the system. It is worth spending a certain amount of time and space in exchange for the ease of use of the system.
3. Be careful when using the trigger functions of current mainstream DBMSs. On the one hand, triggers carry a large performance overhead; on the other hand, multi-level trigger cascades are difficult to control and prone to errors. When absolutely necessary, prefer BEFORE statement-level triggers.
4. In the requirements analysis stage, a naming convention for integrity constraints must be formulated, and try to use meaningful combinations of English words, abbreviations, table names, column names, and underlines to make them easy to recognize and remember. If you use CASE tools, there are generally default rules, which can be modified and used on this basis.
5. Database integrity must be carefully tested according to business rules to eliminate conflicts between implicit integrity constraints and the impact on performance as early as possible.
6. There must be a dedicated database design team responsible for the analysis, design, testing, implementation and early maintenance of the database from beginning to end. Database designers are not only responsible for the design and implementation of database integrity constraints based on DBMS, but also responsible for reviewing the database integrity constraints implemented by application software.
7. Appropriate CASE tools should be used to reduce the workload at each stage of database design. A good CASE tool can support the entire database life cycle, which will greatly improve the work efficiency of database designers and make it easier to communicate with users.
iii. The role of database integrity
Database integrity constraints prevent legitimate users from entering semantically invalid data into the database.
Using the integrity control mechanism based on DBMS to implement business rules is easy to define and understand, and can reduce the complexity of the application and improve the operating efficiency of the application. At the same time, because the integrity control mechanism of DBMS is centrally managed, it is easier to achieve database integrity than applications.
Reasonable database integrity design can balance integrity and system performance. For example, when loading a large amount of data, the DBMS-based integrity constraints can be temporarily disabled before loading and re-enabled afterward, guaranteeing database integrity without hurting data-loading efficiency.
In functional testing of application software, improving database integrity helps to detect application software errors as early as possible.
Database integrity constraints can be divided into six categories: column-level static constraints, tuple-level static constraints, relationship-level static constraints, column-level dynamic constraints, tuple-level dynamic constraints, and relationship-level dynamic constraints. Dynamic constraints are usually implemented by application software. The database integrity supported by different DBMS is basically the same. The DBMS-based integrity constraints supported by a common relational database system are shown in Table 4-3.
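A minimal sketch of the DBMS-based static constraints from these categories, using the standard-library sqlite3 with an invented schema: entity integrity (PRIMARY KEY), referential integrity (FOREIGN KEY), and a column-level CHECK constraint:

```python
# Hedged sketch of DBMS-based static integrity constraints; the schema
# is an invented example, not from the text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE dept  (dept_id INTEGER PRIMARY KEY,
                    name    TEXT NOT NULL UNIQUE);
CREATE TABLE staff (staff_id INTEGER PRIMARY KEY,
                    age      INTEGER CHECK (age BETWEEN 18 AND 65),
                    dept_id  INTEGER NOT NULL REFERENCES dept(dept_id));
""")
conn.execute("INSERT INTO dept VALUES (1, 'R&D')")
conn.execute("INSERT INTO staff VALUES (1, 30, 1)")   # satisfies all constraints

try:
    conn.execute("INSERT INTO staff VALUES (2, 30, 99)")  # no such department
except sqlite3.IntegrityError as e:
    print("rejected by referential integrity:", e)
```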
iv. Database Integrity Design Example
A good database integrity design first needs to determine the business rules to be implemented through database integrity constraints during the requirements analysis phase. Then, based on a full understanding of the integrity control mechanism provided by a specific DBMS, based on the architecture and performance requirements of the entire system, and in compliance with database design methods and application software design methods, the implementation method of each business rule is reasonably selected. Finally, test carefully to eliminate implicit constraint conflicts and performance issues.
Database integrity design based on a DBMS is generally divided into the following stages:
(1) requirements analysis stage
(2) conceptual structural design stage
The conceptual structure design stage is to convert the results of the requirements analysis into a conceptual model that is independent of the specific DBMS, that is, the Entity-Relationship Diagram (ERD).
(3) Logical structure design stage
This stage is to convert the conceptual structure into a data model supported by a certain DBMS and optimize it, including the standardization of the relational model.
VI. Security architecture design case analysis
i. Take an industrial security architecture design based on hybrid cloud as an example.
ii. Hybrid cloud architecture is often embraced by large enterprises. Hybrid cloud combines public cloud and private cloud and is the main model and development direction of cloud computing in recent years.
iii. The architecture of a secure production management system for large enterprises using hybrid cloud technology
iv. When designing a safe production management system based on hybrid cloud, five aspects of security issues need to be considered.
(1) Device security
(2) cyber security
(3) Control security
(4) Application security
(5) Data Security
八、 Cloud native architecture
I. summary
"Cloud Native" Cloud Native means that its application software and services are in the cloud rather than in the traditional data center. Native represents application software that has been based on the cloud environment from the beginning and is specifically designed for the characteristics of the cloud. It can make full use of the elasticity and distributed advantages of the cloud environment and maximize the productivity of the cloud environment.
II. Development overview
i. The "waterfall process" development model, on the one hand, creates upstream and downstream information development Asymmetry, on the other hand, lengthens the development cycle and makes adjustment difficult.
ii. Agile development only solves the problems of software development efficiency and version update speed; it does not yet solve the problem of effectively connecting development with operations and maintenance.
iii. DevOps can be seen as the intersection of development, technical operations, and quality assurance, promoting communication, collaboration, and integration among them, thereby shortening the development cycle and improving efficiency.
iv. Cloud native technologies such as containers and microservices provide good prerequisites for DevOps, enabling IT software development to realize DevOps and continuous delivery. In other words, the ability to implement DevOps and continuous delivery has become an integral part of the value of cloud native technology.
v. The deep integration of cloud native and business scenarios not only injects new momentum for development and innovation into various industries, but also promotes the faster development of cloud native technology and a more mature ecology, which is mainly reflected in the following points.
1. From the perspective of the value it brings to organizations, cloud native architecture meets the personalized computing power needs of different application scenarios by supporting multiple forms of computing power and, based on software-hardware co-designed architecture, provides applications with cloud native computing power of extreme performance. Based on multi-cloud governance and edge-cloud collaboration, it creates an efficient, highly reliable, distributed ubiquitous computing platform, unifying computing resources in various forms including containers, bare metal, virtual machines, and functions. It also builds an efficient, application-centric resource scheduling and management platform that provides enterprises with one-click deployment, application-aware intelligent scheduling, and comprehensive monitoring and operation and maintenance capabilities.
2. Through the latest DevSecOps application development model, agile application development is achieved: business applications iterate faster, user needs are met efficiently, and security is ensured throughout the process. For service integration, both intrusive and non-intrusive modes are provided to assist the upgrade of the enterprise application architecture while achieving organic collaboration between new and old applications, building the new without breaking the old.
3. Help enterprises manage data well, quickly build data operation capabilities, realize the asset accumulation and value mining of data, and use a series of AI technologies to empower enterprise applications again, combining the capabilities of data and AI to help enterprises achieve intelligent upgrades in their businesses.
4. Combined with the cloud platform's comprehensive organizational-level security services and security compliance capabilities, it ensures that organizational applications are safely built on the cloud and businesses run safely.
III. Architecture definition
i. From a technical perspective, cloud native architecture is a collection of architectural principles and design patterns based on cloud native technology. It aims to strip as much non-business code as possible out of cloud applications, letting cloud facilities take over the many non-functional features originally handled in application code (such as elasticity, resilience, security, observability, and grayscale release), so that the business is no longer troubled by non-functional interruptions while becoming lightweight, agile, and highly automated.
ii. Cloud native technology partially relies on the three-layer concept of traditional cloud computing, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
iii. Cloud native code usually consists of three parts:
1. Business code
Refers to the code that implements business logic
2. Third party software
All the third-party libraries the business code depends on, including business libraries and basic libraries.
3. Code that handles non-functional features
Refers to code that implements non-functional capabilities such as high availability, security, and observability.
Only the business code is the core and brings real value to the business. The other two parts are only accessories.
iv. Huge changes in code structure
In a cloud environment, "how to obtain storage" becomes a number of services, including object storage services, block storage services, and file storage services. The cloud not only changes the interface for developers to obtain these storage capabilities, but also solves various challenges in distributed scenarios, including high availability challenges, automatic expansion and contraction challenges, security challenges, operation and maintenance upgrade challenges, etc., application Developers do not need to deal with the problem of how to synchronize locally saved content to the remote end before the node goes down in their code, nor do they need to deal with the problem of how to expand the storage node when the business peak arrives, and the operation and maintenance personnel of the application do not need to deal with the problem. When a "zeroday" security issue is discovered, the third-party storage software is urgently upgraded.
v. Non-functional features are heavily delegated
i. Any application provides two types of features:
1. Functional features
Code that truly brings value to the business, such as creating customer profiles, processing orders, payments, etc. Even some common business functional features, such as organization management, business dictionary management, search, etc., are closely aligned with business needs.
2. non-functional features
Features that do not bring direct business value to the business, but are usually essential, such as high availability, disaster recovery, security features, operability, ease of use, testability, grayscale release capabilities, etc.
ii. Cloud computing solutions
1. virtual machine
When the virtual machine detects an abnormality in the underlying hardware, it automatically live-migrates the application; the migrated application does not need to restart and remains able to provide external services, and neither the application nor its users perceive the migration.
2. container
The container platform detects abnormal process states through monitoring and health checks, then takes abnormal nodes offline, brings new nodes online, and switches production traffic; the entire process completes automatically without operations intervention.
3. cloud service
If the application hands its "stateful" parts over to cloud services (such as cache, database, and object storage), and keeps in-memory state small or quickly rebuildable from disk, then, given the extremely high availability of the cloud services themselves, the application becomes a thinner "stateless" application, and business interruptions caused by availability failures are reduced to the minute level. If the application uses an N:M peer-to-peer architecture (each of N clients can access any of M servers), then combined with load balancing products it can obtain strong high availability.
vi. Highly automated software delivery
Containers package software in a standard way, and containers and related technologies help shield differences between different environments, thereby enabling standardized software delivery based on containers.
For automated delivery, a tool that can describe different environments is also needed, so that the software can "understand" the target environment, the delivery content, and the configuration list through code, identify the differences in the target environment, and complete the installation, configuration, operation, and changes of the software in an "end-state oriented" manner.
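A toy Python sketch of this "end-state oriented" idea: compare the declared target environment with the observed one and apply only the differences. The environment model is invented, not a real tool:

```python
# Hedged sketch of end-state reconciliation: the tool computes the delta
# between the declared and the observed environment and changes only that.
desired = {"app": "1.4.2", "replicas": 3, "log_level": "info"}
observed = {"app": "1.4.1", "replicas": 3, "log_level": "debug"}

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Produce the change actions needed to reach the declared end state."""
    return [f"set {k}={v} (was {observed.get(k)})"
            for k, v in desired.items() if observed.get(k) != v]

for action in reconcile(desired, observed):
    print(action)   # only the drifted keys change; the rest stay untouched
```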
IV. The basic principle
1. Servitization
When the code size exceeds what a small team can cooperate on, service-oriented splitting becomes necessary, including splitting into a microservice or miniservice (MiniService) architecture. The service-oriented architecture separates modules with different life cycles so they can iterate independently, preventing fast-iterating modules from being slowed by slow ones, thereby accelerating overall progress and improving system stability. At the same time, service-oriented architecture is based on interface-oriented programming: functions within a service are highly cohesive, and extracting shared function modules across services increases software reuse.
Rate limiting and degradation, circuit breaking and bulkheads, grayscale release, back pressure, zero-trust security, and the like in a distributed environment are essentially control policies based on service traffic (rather than network traffic). The cloud native architecture therefore also emphasizes servitization in order to abstract the relationships between business modules at the architectural level and standardize the transmission of service traffic, helping business modules perform policy control and governance based on service traffic, regardless of the language in which the services are developed.
2. elasticity
Elasticity means the deployment scale of the system automatically expands and contracts as business volume changes, without preparing fixed hardware and software resources according to prior capacity planning. Good elasticity shortens the time from procurement to go-live, frees the organization from the cost of extra (including idle) software and hardware resources, and reduces IT costs. More importantly, when business scale faces massive unexpected expansion, the organization no longer has to "say no" because of insufficient reserved resources, which protects its revenue.
3. observable
Observability differs from the capabilities offered by monitoring, business probing, and application performance monitoring (APM) systems: it is the active use of logs, link tracing, and metrics in distributed systems such as the cloud, making the time consumption, return values, and parameters of the many service calls behind a single click clearly visible, and even allowing drill-down into each third-party software call, SQL request, node topology, and network response. This capability lets operations, development, and business personnel grasp the software's running state in real time and obtain correlation analysis from multi-dimensional data indicators, so that business health and user experience can be continuously measured and optimized.
4. Resilience
Resilience represents the software's ability to withstand failures of the hardware and software components it depends on. Typical anomalies include hardware failures, hardware resource bottlenecks (such as CPU or NIC bandwidth exhaustion), business traffic exceeding the software's design capacity, faults and disasters affecting the data center, software vulnerabilities (bugs), hacker attacks, and other factors fatal to business availability.
Resilience describes, from multiple dimensions, the software's ability to keep providing business services; the core goal is to improve the software's mean time between failures (MTBF). Architecturally, resilience includes service asynchronization; retry, rate limiting, degradation, circuit breaking, and back pressure; master-slave and cluster modes; high availability within an AZ (Availability Zone); unitization; cross-region disaster recovery; and multi-site active-active disaster recovery.
5. Full-process automation
On the one hand, the organization's software delivery process is standardized; on the other hand, automation is built on that standardization. Through self-describing configuration data and an end-state oriented delivery process, automation tools understand the delivery goals and environment differences and automate the entire software delivery and operations process.
6. Zero trust
Zero Trust Security re-examines the traditional perimeter-based security architecture and offers new guidance: by default, no person, device, or system inside or outside the network should be trusted. The basis of trust for access control must be rebuilt on authentication and authorization; IP addresses, hosts, geographic locations, and networks cannot serve as reliable evidence of trust. Zero trust overturns the old access control paradigm, guiding security architecture from "network-centric" to "identity-centric"; its essential demand is identity-centric access control.
The first core issue of zero trust is identity: giving different entities different identities solves the question of who may access which resource under which conditions. In microservice R&D, testing, and operations scenarios, identity and its policies are not only the basis of security but also of many isolation mechanisms (covering resources, services, and environments); in scenarios where users access the organization's internal applications, identity and its policies provide on-demand access services.
7. Continuous architecture evolution
The cloud native architecture itself must be able to continue evolving rather than being closed. Besides incremental iteration and target selection, architecture governance and risk control at the organizational level (such as an architecture control committee) must be considered, especially the balance among architecture, business, and implementation under rapid business iteration. For new applications, choosing a cloud native architecture control strategy is relatively easy (usually along the dimensions of elasticity, agility, and cost). For migrating existing applications, however, the architecture must weigh the cost and risk of migrating legacy applications and of moving to the cloud, and technically achieve fine-grained control of applications and traffic through microservice/application gateways, application integration, adapters, service mesh, data migration, and online grayscale release.
V. Common architectural patterns
1. Service-oriented architecture
Service-oriented architecture is the standard architectural pattern for building cloud native applications in the new era. It divides software into application modules, defines their business relationships with interface contracts (such as IDL), ensures interoperability with standard protocols (HTTP, gRPC, etc.), and, combined with Domain-Driven Design (DDD), Test-Driven Development (TDD), and containerized deployment, improves the code quality and iteration speed of each interface.
Typical patterns of service-oriented architecture are microservices and small service patterns, where small services can be seen as a combination of a group of very closely related services that share data. The small service model is usually suitable for very large software systems to avoid excessive call loss (especially inter-service calls and data consistency processing) and governance complexity caused by too fine granularity of the interface.
2. Mesh architecture
Mesh (grid) architecture separates the middleware framework (such as RPC, cache, and asynchronous messaging) from the business process, further decoupling the middleware software development kit (SDK) from business code. Middleware upgrades then have no impact on business processes, and even migrating middleware to another platform is transparent to the business.
After separation, only a very "thin" client part remains in the business process; the client rarely changes and is only responsible for communicating with the Mesh process. The flow control, security, and other logic that previously lived in the SDK is handled by the Mesh process.
After the Mesh architecture is implemented, a large number of distributed architecture patterns (circuit breaking, rate limiting, degradation, retry, back pressure, isolation, etc.) are handled by the Mesh process, even when the business code does not use the corresponding third-party software packages; at the same time, the system gains better security (such as zero-trust capabilities), traffic-based dynamic environment isolation, and traffic-based smoke and regression testing.
3. Serverless
Serverless (serverless) "takes away" the action of "deployment" from operation and maintenance, so that developers do not need to care about application operation. Run location, operating system, network configuration, CPU performance, etc.
Serverless is not suitable for every type of application, so architecture decision-makers need to assess whether an application type suits serverless computing. If the application is stateful, serverless scheduling does not help with state synchronization, so cloud scheduling may lose context. If the application is a long-running, background, compute-intensive task, serverless offers no advantage. If the application involves frequent external I/O (network or storage, including inter-service calls), the heavy I/O burden and high latency make it unsuitable. Serverless is well suited to event-driven data computing tasks, request/response applications with short compute times, and long-cycle tasks without complex mutual calls.
4. Separation of storage and computing
The difficulty of CAP (Consistency, Availability, Partition tolerance) in distributed environments mainly concerns stateful applications: stateless applications have no C (consistency) dimension, so they can obtain good A (availability) and P (partition tolerance) and thus better elasticity. In a cloud environment, it is recommended to use cloud services to hold all kinds of transient state (such as sessions) and structured or unstructured persistent data, achieving separation of storage and compute. Some state, however, would severely degrade transaction performance if kept in a remote cache, for example when transaction session data is large and must constantly be re-fetched from context; in that case, consider event-log snapshots (checkpoints), which allow fast, incremental state restoration after a restart and reduce the business impact of unavailability.
5. Distributed transactions
The traditional XA (eXtended Architecture) mode offers strong consistency but poor performance.
Message-based eventual consistency generally has high performance, but limited generality.
The TCC (Try-Confirm-Cancel) mode completely controls transactions by the application layer, and the transaction isolation is controllable and can be relatively efficient; however, it is very intrusive to the business, and the cost of design, development and maintenance is very high.
The SAGA mode (referring to the fault management mode that allows the establishment of consistent distributed applications) has similar advantages and disadvantages to the TCC mode but does not have the Try phase. Instead, each forward transaction corresponds to a compensation transaction, which also makes development and maintenance costs high.
The AT mode of the open source project Seata offers high performance, requires no code development, and performs rollback automatically, though it has some usage-scenario restrictions.
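Of these modes, SAGA is easy to sketch in Python: each forward step is paired with a compensating step, and on failure the completed steps are undone in reverse order. The step names below are invented placeholders:

```python
# Hedged sketch of the SAGA pattern: pair every forward transaction with
# a compensating transaction; on failure, compensate in reverse order.
def run_saga(steps):
    """steps: list of (forward, compensate) callables."""
    done = []
    try:
        for forward, compensate in steps:
            forward()
            done.append(compensate)
    except Exception as exc:
        print(f"step failed ({exc}); compensating")
        for compensate in reversed(done):   # undo completed steps in reverse
            compensate()

def fail():
    raise RuntimeError("payment declined")

run_saga([
    (lambda: print("reserve stock"),  lambda: print("release stock")),
    (lambda: print("create order"),   lambda: print("cancel order")),
    (fail,                            lambda: print("refund payment")),
])
```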
6. observable
An observable architecture covers three aspects: Logging, Tracing, and Metrics. Logging provides multi-level (verbose/debug/warning/error/fatal) detailed information tracking, proactively emitted by application developers; Tracing provides complete call-link tracking of a single request from front end to back end, which is especially useful in distributed scenarios; Metrics provide multi-dimensional quantitative measurements of the system.
Architecture decision-makers need to select appropriate open source frameworks that support observability (such as OpenTracing and OpenTelemetry), standardize the specification of contextual observable data (such as method names, user information, geographic location, and request parameters), plan which services and technical components this observable data flows through, and use the SpanID/TraceID in logs and tracing information to ensure there is enough information for fast correlation analysis when performing distributed link analysis.
Since the main goal of observability is to measure service SLOs (Service Level Objectives) and thereby optimize SLAs (Service Level Agreements), the architecture design should define clear SLOs for each component, including concurrency, time consumption, available time, and capacity.
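A minimal Python sketch of measuring one SLO dimension named above (time consumption): record request latencies and check a percentile target; the p99 < 200 ms target is an invented example:

```python
# Hedged sketch: record latencies of a stand-in workload and check an
# invented p99 SLO target.
import time, statistics

latencies_ms = []

def timed(fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    latencies_ms.append((time.perf_counter() - start) * 1000)
    return result

for _ in range(100):
    timed(sum, range(10_000))  # stand-in for a real service call

p99 = statistics.quantiles(latencies_ms, n=100)[98]  # 99th percentile
print(f"p99 = {p99:.2f} ms; SLO met: {p99 < 200}")
```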
7. event driven
Event Driven Architecture (EDA) is essentially an integrated architecture pattern between applications/components. Events are different from traditional messages. Events have Schema, so the validity of the Event can be verified. At the same time, EDA has a QoS guarantee mechanism and can also respond to event processing failures.
Event-driven architecture is not only used for (micro)service decoupling, but can also be applied to the following scenarios.
1||| Enhance service resilience
Since services are integrated asynchronously, any processing failure or even downtime in the downstream will not be perceived by the upstream, and will naturally not have an impact on the upstream.
2||| CQRS (Command Query Responsibility Segregation, command query responsibility separation)
State-affecting commands are initiated via events, while state-neutral queries use synchronously called API interfaces. Combined with the Event Sourcing mechanism in EDA, this can maintain the consistency of data changes; when the service state needs to be rebuilt, the events in EDA are simply replayed.
3||| Data change notification
Under a service architecture, when data changes in one service, other services are often interested. For example, after a user order completes, the points service and credit service need to be notified via events so they can update the user's points and credit level.
4||| Build open interfaces
Under EDA, the event provider need not care about subscribers, unlike service calls, where the data producer must know where the data consumer is and call it; this keeps the interface open.
5||| event stream processing
Applied to data analysis over large event streams (rather than discrete events); a typical application is Kafka-based log processing.
6||| Event-triggered responses
In the IoT era, data generated by a large number of sensors does not need to wait for the return of processing results like human-computer interaction. It is naturally suitable to use EDA to build data processing applications.
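The decoupling these scenarios rely on can be sketched with a toy in-process event bus in Python, where the producer does not know its subscribers; the event name and handlers echo the points/credit example above and are otherwise invented:

```python
# Hedged sketch of event-driven integration: a toy in-process event bus.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:   # producer is unaware of consumers
        handler(payload)

subscribe("OrderCompleted", lambda e: print(f"points service: +{e['amount']} pts"))
subscribe("OrderCompleted", lambda e: print(f"credit service: update user {e['user']}"))

publish("OrderCompleted", {"user": "u1001", "amount": 50})
```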
VI. Cloud native case
i. As one of the fastest-growing logistics organizations, a certain express delivery company has actively explored technology-driven business enablement to reduce costs and improve efficiency. The company now processes tens of millions of orders and hundreds of millions of logistics track records per day, generates TB-level data daily, and uses 1,300 computing nodes for real-time business processing. Previously its core business applications ran in an IDC machine room; the original IDC system carried the company steadily through its early period of rapid growth, but with exponential business growth and increasingly diverse business forms, many problems surfaced: the traditional IOE architecture, inconsistent system architectures, stability issues, and R&D efficiency all limited rapid business development; software delivery cycles were too long, special resource demands for large promotions were hard to meet, and system stability was hard to guarantee. After multiple rounds of requirements discussion and technical validation with a cloud service provider, the company finally adopted cloud native technology and architecture to move its core business to the cloud.
ii. Solution
1. Introducing cloud native database
By introducing separate OLTP and OLAP databases, online transaction data and offline analysis logic were split into two databases, changing the previous situation of relying entirely on the Oracle database and remedying the Oracle database's shortcomings in historical data query scenarios.
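As a minimal, hedged illustration of that split, the Python sketch below routes transactional writes to the OLTP store and analytical reads to the OLAP store. sqlite3 files stand in for the real database engines, and all table and file names are assumptions.

# Illustrative OLTP/OLAP split: writes hit the transactional store,
# analytical queries hit the analytical store. sqlite3 is only a stand-in.
import sqlite3

oltp = sqlite3.connect("orders_oltp.db")  # online transactions
olap = sqlite3.connect("orders_olap.db")  # offline/historical analysis

oltp.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT, amount REAL)")
olap.execute("CREATE TABLE IF NOT EXISTS orders_history (id TEXT, amount REAL)")

def place_order(order_id: str, amount: float) -> None:
    # Hot path: only the OLTP store is touched.
    with oltp:
        oltp.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))

def total_historical_amount() -> float:
    # Heavy historical query runs against the OLAP store, off the hot path.
    row = olap.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders_history").fetchone()
    return row[0]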
2. Application containerization
With the introduction of containerization technology, the problem of inconsistent environments was effectively solved: applications behave consistently across development, testing, and production environments. Compared with virtual machines, containers improve both resource efficiency and startup speed, making applications better suited to microservice scenarios and effectively improving R&D efficiency.
3. Microservice transformation
Because many past business functions were implemented with Oracle stored procedures and triggers, service dependencies between systems also had to be synchronized through Oracle GoldenGate (OGG). This made the systems difficult to maintain and their stability poor. By introducing Kubernetes service discovery to build a microservice solution and splitting the business by business domain, the entire system became easier to maintain (a minimal discovery sketch follows).
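For illustration only: with Kubernetes service discovery, a caller reaches a peer microservice through its stable Service DNS name rather than a hard-coded address. The service name, namespace, port, and path below are assumptions.

# Hedged sketch: calling a peer microservice via Kubernetes' built-in
# DNS-based service discovery. Names, port, and path are illustrative.
import requests

# Inside the cluster, the Service "points-service" in namespace "default"
# resolves through cluster DNS; no pod IPs are hard-coded anywhere.
POINTS_SERVICE_URL = "http://points-service.default.svc.cluster.local:8080"

def add_points(user_id: str, amount: int) -> None:
    resp = requests.post(f"{POINTS_SERVICE_URL}/points",
                         json={"user_id": user_id, "amount": amount},
                         timeout=3)
    resp.raise_for_status()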
iii. Structure established
Taking into account the express delivery company's actual business needs and technical characteristics, the cloud architecture the company settled on is shown in Figure 4-3.
(1) Infrastructure
All computing resources are drawn from bare metal servers provided by the cloud service provider. Compared with general cloud servers (ECS), Kubernetes paired with bare metal servers achieves better performance and more reasonable resource utilization. In addition, cloud resources can be obtained on demand, which is extremely important for a company with short-term high-traffic business scenarios such as promotional activities. Compared with building computer rooms offline and keeping machines on standby, cloud resources are ready when needed and can be released once a promotion ends, reducing management and procurement costs.
(2) Traffic access
The cloud service provider supplies two sets of traffic access: one for public network requests and one for internal service calls. Domain name resolution uses cloud DNS and PrivateZone. Kubernetes' Ingress capability is used to achieve unified domain name forwarding, reducing the number of public network SLBs and improving operation and maintenance efficiency.
(3) Platform layer
The cloud-native PaaS platform built on Kubernetes has obvious advantages, including:
1||| Open up the DevOps closed loop and unify testing, integration, pre-release and production environments;
2||| Natural resource isolation and high machine resource utilization;
3||| Traffic access enables refined management;
4||| Integrated logs, link diagnosis, and Metrics platform;
5||| Unify API Server interfaces and extensions to support multi-cloud and hybrid cloud deployment.
(4) Application service layer
Each application is given its own Namespace on Kubernetes, so resources are isolated between applications. Each application's configuration is defined in a YAML template: at deployment time, editing the image version in the template completes a version upgrade quickly, and when a rollback is needed, the historical image version can be started directly for a fast rollback (a hedged sketch follows).
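As a hedged sketch of that upgrade-and-rollback flow, the official Kubernetes Python client can patch a Deployment's image tag in place; the deployment, namespace, container, and image names below are illustrative assumptions.

# Illustrative sketch using the official kubernetes Python client to bump
# (or roll back) a Deployment's image. All names are assumptions; the
# container name is assumed to match the deployment name.
from kubernetes import client, config

def set_image(deployment: str, namespace: str, image: str) -> None:
    config.load_kube_config()  # or load_incluster_config() inside a Pod
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": deployment, "image": image}]}}}}
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace,
                                     body=patch)

# Upgrade, then roll back by pointing at the previous image tag.
set_image("order-service", "order-service", "registry.example.com/order-service:v2")
set_image("order-service", "order-service", "registry.example.com/order-service:v1")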
(5) Operation and maintenance management
The online Kubernetes cluster uses the cloud service provider's managed container service, eliminating the need to operate and maintain Master nodes; only the procedures for bringing Worker nodes online and offline need to be defined. Meanwhile, business systems search business logs through Alibaba Cloud's PaaS platform and submit scaling tasks according to business needs; the system completes scaling automatically, reducing the business risks of operating Kubernetes clusters directly.
iv. Application benefits
1. Cost
Cloud products are hosted and maintained in the cloud, requiring no operation and maintenance by the enterprise, which effectively saves manual operation and maintenance costs and allows the enterprise to focus more on its core business.
2. Stability
Cloud products provide SLAs of at least five nines (99.999%) to guarantee system stability, far higher than what self-built systems typically achieve. In terms of data security, data on the cloud can easily be backed up off-site, and the archive storage products in the cloud service provider's storage portfolio offer high reliability, low cost, security, and virtually unlimited capacity, making enterprise data safer.
3. Efficiency
Through deep integration with cloud products, R&D personnel can complete R&D and operation and maintenance work in one stop. From establishing a business requirement, to pulling a branch for development, to functional regression in the test environment, and finally to pre-release verification and go-live, the entire continuous integration process can be shortened to minutes. For troubleshooting, R&D personnel select the application they are responsible for and quickly retrieve the program's exception logs through the integrated SLS log console to locate problems, with no need to log in to machines to check logs.
4. Empower business
Cloud service providers offer more than 300 types of cloud components, covering computing, AI, big data, IoT and many other fields. R&D personnel can use them out of the box, effectively saving the technology costs of business innovation.