300 Lines of Code a Day: Days 66–68 (Active Learning: ALEC)


 The code comes from Mr. Min Fan's series "日撸 Java 三百行" (Days 61–70):

日撸 Java 三百行(61-70天,决策树与集成学习)_闵帆的博客-CSDN博客

       This implementation computes densities with a Gaussian kernel, whereas the original ALEC paper is based on density peaks. Both are density-based clustering approaches, but they differ slightly.
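As a quick reference (my own notation, not from the original post), the Gaussian-kernel density computed in computeDensitiesGaussian() below is \rho_i = \sum_j \exp(-d_{ij}^2 / d_c^2), where d_{ij} is the Euclidean distance between instances i and j, and d_c (the member variable radius) is a given ratio of the maximal pairwise distance. The original density-peaks paper uses a cutoff kernel instead: \rho_i counts the instances within distance d_c of instance i.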

The basic idea of cluster-based active learning is as follows (a toy sketch of Steps 2 and 3 follows this list):
Step 1. Sort the objects in descending order of representativeness.
Step 2. Suppose the current block contains N objects; query the labels (classes) of the \sqrt{N} most representative ones.
Step 3. If these \sqrt{N} labels all belong to the same class, the block is regarded as pure, and all remaining objects are classified into that class. Stop.
Step 4. Otherwise, split the current block into two sub-blocks and go to Step 2 for each of them.
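
Below is a minimal, self-contained sketch of Steps 2 and 3 on a single block, assuming the block is already sorted by representativeness. The class name, label array, and block contents are made-up illustrative data, not part of ALEC or of the code that follows.

import java.util.Arrays;

public class BlockQueryDemo {
	// Toy ground-truth labels of 10 instances (hypothetical data).
	static int[] labels = { 0, 0, 0, 1, 1, 0, 1, 0, 1, 1 };

	public static void main(String[] args) {
		// A block of instance indices, assumed sorted by representativeness in descending order.
		int[] block = { 2, 5, 0, 7, 3, 9 };

		// Step 2: query the labels of the sqrt(N) most representative instances.
		int tempExpectedQueries = (int) Math.sqrt(block.length); // sqrt(6) -> 2
		int[] queriedLabels = new int[tempExpectedQueries];
		for (int i = 0; i < tempExpectedQueries; i++) {
			queriedLabels[i] = labels[block[i]]; // Ask the oracle.
		}//of for i

		// Step 3: if all queried labels agree, the block is pure and the rest inherit the label.
		boolean tempPure = true;
		for (int i = 1; i < tempExpectedQueries; i++) {
			if (queriedLabels[i] != queriedLabels[0]) {
				tempPure = false;
				break;
			}//of if
		}//of for i

		System.out.println("Queried labels: " + Arrays.toString(queriedLabels));
		System.out.println(tempPure ? "Pure block: classify the remaining instances as " + queriedLabels[0]
				: "Impure block: split in two and go back to Step 2 for each sub-block (Step 4).");
	}//of main
}//of BlockQueryDemo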

package machinelearning.activelearning;

import java.io.FileReader;
import java.util.Arrays;

import weka.core.Instances;

public class Alec {
	
	/**
	 * The whole data set.
	 */
	Instances dataset;
	
	/**
	 * The maximal number of queries that can be provided.
	 */
	int maxNumQuery;
	
	/**
	 * The actual number of queries.
	 */
	int numQuery;
	
	/**
	 * The radius, also dc in the paper. It is employed for density computation.
	 */
	double radius;
	
	/**
	 * The densities of instances, also rho in the paper.
	 */
	double[] densities;
	
	/**
	 * Distance to master
	 */
	double[] distanceToMaster;
	
	/**
	 * Sorted indices, where the first element indicates the instance with the biggest density.
	 */
	int[] descendantDensities;
	
	/**
	 * Priority
	 */
	double[] priority;
	
	/**
	 * The maximal distance between any pair of points.
	 */
	double maximalDistance;
	
	/**
	 * Who is my master?
	 */
	int[] masters;
	
	/**
	 * Predicted labels.
	 */
	int[] predictedLabels;
	
	/**
	 * Instance status. 0 for unprocessed, 1 for queried, 2 for classified.
	 */
	int[] instanceStatusArray;
	
	/**
	 * The descendant indices to show the representativeness of instances in a descendant order.
	 */
	int[] descendantRepresentatives;
	
	/**
	 * Indicate the cluster of each instance. It is only used in clusterInTwo(int[]);
	 */
	int[] clusterIndices;
	
	/**
	 * Blocks with size no more than this threshold should not be split further.
	 */
	int smallBlockThreshold = 3;
	
	
	/**
	 * *********************************************************
	 * The constructor.
	 * 
	 * @param paraFilename   The name of the data file.
	 * *********************************************************
	 */
	public Alec(String paraFilename) {
		try {
			FileReader tempReader = new FileReader(paraFilename);
			dataset = new Instances(tempReader);
			dataset.setClassIndex(dataset.numAttributes() - 1);
			tempReader.close();
		} catch (Exception e) {
			System.out.println(e);
			System.exit(0);
		}//of try
		
		computeMaximalDistance();
		clusterIndices = new int[dataset.numInstances()];
		
	}//of the constructor
	
	/**
	 * ***********************************************************
	 * Merge sort in descending order to obtain an index array. The original
	 * array is unchanged. The method should be tested further. <br>
	 * Examples: input [1.2, 2.3, 0.4, 0.5], output [1, 0, 3, 2]. <br>
	 * input [3.1, 5.2, 6.3, 2.1, 4.4], output [2, 1, 4, 0, 3].
	 * 
	 * @param paraArray  The original array
	 * @return   The sorted indices.
	 * ***********************************************************
	 */
	public static int[] mergeSortToIndices(double[] paraArray) {
		int tempLength = paraArray.length;
		int[][] resultMatrix = new int[2][tempLength];
		
		//Initialize. Only the first row needs initializing here; the second row is filled by copying from the first during merging.
		int tempIndex = 0;
		for (int i = 0; i < tempLength; i++) {
			resultMatrix[tempIndex][i] = i;
		}//of for i
		
		// Merge
		int tempCurrentLength = 1;
		// The indices for current merged groups.
		int tempFirstStart, tempSecondStart, tempSecondEnd;
		
		while (tempCurrentLength < tempLength) {
			// Divide into a number of groups.
			// Here the boundary is adaptive to array length not equal to 2^k.
			//Math.ceil() rounds up; the result is the number of group pairs to merge in this round. Each pass of this for loop completes one round of merging, and sorting is finished once the while condition fails.
			//Each iteration merges two groups, so the number of pairs per round is ceil((tempLength + 0.0) / (tempCurrentLength * 2)), and each group has length tempCurrentLength.
			for (int i = 0; i < Math.ceil((tempLength + 0.0) / tempCurrentLength /2); i++) {
				// Boundaries of the group
				tempFirstStart = i * tempCurrentLength * 2;
				tempSecondStart = tempFirstStart + tempCurrentLength;
				tempSecondEnd = tempSecondStart + tempCurrentLength - 1;
				
				if (tempSecondEnd >= tempLength) {
					tempSecondEnd = tempLength - 1;
				}//of if
				
				// Merge this group
				int tempFirstIndex = tempFirstStart;
				int tempSecondIndex = tempSecondStart;
				int tempCurrentIndex = tempFirstStart;
				
				if (tempSecondStart >= tempLength) {
					for (int j = tempFirstIndex; j < tempLength; j++) {
						resultMatrix[(tempIndex + 1) % 2][tempCurrentIndex] = resultMatrix[tempIndex % 2][j];
						tempFirstIndex ++;
						tempCurrentIndex ++;
					}//of for j
					break;
				}//of if
				
				while ((tempFirstIndex <= tempSecondStart - 1) && (tempSecondIndex <= tempSecondEnd)) {
					if (paraArray[resultMatrix[tempIndex % 2][tempFirstIndex]] <= paraArray[resultMatrix[tempIndex % 2][tempSecondIndex]]) {
						resultMatrix[(tempIndex + 1) % 2][tempCurrentIndex] = resultMatrix[tempIndex % 2][tempSecondIndex];
						tempSecondIndex ++;
					}else {
						resultMatrix[(tempIndex + 1) % 2][tempCurrentIndex] = resultMatrix[tempIndex % 2][tempFirstIndex];
						tempFirstIndex ++;
					}//of if
					tempCurrentIndex ++;
				}//of while
				
				// Remaining part
				for (int j = tempFirstIndex; j < tempSecondStart; j++) {
					resultMatrix[(tempIndex + 1) % 2][tempCurrentIndex] = resultMatrix[tempIndex % 2][j];
					tempCurrentIndex++;
				}//of for j
				
				for (int j = tempSecondIndex; j <= tempSecondEnd; j++) {
					resultMatrix[(tempIndex + 1) % 2][tempCurrentIndex] = resultMatrix[tempIndex % 2][j];
					tempCurrentIndex++;
				}//of for j
			}//of for i
			
			tempCurrentLength *= 2;
			tempIndex ++;     //Alternate between the two rows: one holds the current order, the other receives the merged indices in the next round.
		}//of while
		
		return resultMatrix[tempIndex % 2];
	}//of mergeSortToIndices
	
	/**
	 * **********************************************************************
	 * The Euclidean distance between two instances. Other distance measures are
	 * not supported for simplicity.
	 * 
	 * @param paraI   The index of the first instance.
	 * @param paraJ   The index of the second instance.
	 * @return   The distance.
	 * **********************************************************************
	 */
	public double distance(int paraI, int paraJ) {
		double resultDistance = 0;
		double tempDifference;
		
		for (int i = 0; i < dataset.numAttributes() - 1; i++) {
			tempDifference = dataset.instance(paraI).value(i) - dataset.instance(paraJ).value(i);
			resultDistance += tempDifference * tempDifference;
		}//of for i
		
		resultDistance = Math.sqrt(resultDistance);
		
		return resultDistance;
	}//of distance
	
	/**
	 * *****************************************************************
	 * Compute the maximal distance. The result is stored in a member variable.
	 * *****************************************************************
	 */
	public void computeMaximalDistance() {
		maximalDistance = 0;
		double tempDistance;
		for (int i = 0; i < dataset.numInstances(); i++) {
			for (int j = 0; j < dataset.numInstances(); j++) {
				tempDistance = distance(i, j);
				if (maximalDistance < tempDistance) {
					maximalDistance = tempDistance;
				}//of if
			}//of for j
		}//of for i
		
		System.out.println("maximalDistance = " + maximalDistance);
	}//of computeMaximalDistance
	
	/**
	 * ****************************************************************
	 * Compute the densities using the Gaussian kernel.
	 * Unlike the density-peaks clustering in the original paper, the Gaussian
	 * kernel is used here to compute the density of each instance. In the
	 * density-peaks paper, a distance threshold is set and the density of an
	 * instance is the number of instances within that threshold.
	 * ****************************************************************
	 */
	public void computeDensitiesGaussian() {
		System.out.println("radius = " + radius);
		densities = new double[dataset.numInstances()];
		double tempDistance;
		
		for (int i = 0; i < dataset.numInstances(); i++) {
			for (int j = 0; j < dataset.numInstances(); j++) {
				tempDistance = distance(i, j);
				densities[i] += Math.exp(-tempDistance * tempDistance /(radius * radius));
			}//of for j
		}//of for i
		
		System.out.println("The densities are " + Arrays.toString(densities) + "\r\n");
	}//of computeDensitiesGaussian
	
	/**
	 * ***************************************************************
	 * Compute distanceToMaster, the distance to its master.
	 * ***************************************************************
	 */
	public void computeDistanceToMaster() {
		distanceToMaster = new double[dataset.numInstances()];
		masters = new int[dataset.numInstances()];
		descendantDensities = new int[dataset.numInstances()];
		instanceStatusArray = new int[dataset.numInstances()];
		
		descendantDensities = mergeSortToIndices(densities);
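		// The instance with the highest density has no master; its distance is set to the maximal distance so that it obtains a high priority.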
		distanceToMaster[descendantDensities[0]] = maximalDistance;
		
		double tempDistance;
		for (int i = 1; i < dataset.numInstances(); i++) {
			// Initialize.
			distanceToMaster[descendantDensities[i]] = maximalDistance;
			//Only instances with a higher density can be the master, so instances ranked after i need not be checked.
			for (int j = 0; j <= i - 1; j++) {
				tempDistance = distance(descendantDensities[i], descendantDensities[j]);
				if (distanceToMaster[descendantDensities[i]] > tempDistance) {
					distanceToMaster[descendantDensities[i]] = tempDistance;
					masters[descendantDensities[i]] = descendantDensities[j];
				}//of if
			}//of for j
		}//of for i
		System.out.println("First compute, masters = " + Arrays.toString(masters));
		System.out.println("descendantDensities = " + Arrays.toString(descendantDensities));
	}//of computeDistanceToMaster
	
	/**
	 * ****************************************************************
	 * Compute priority. Element with higher priority is more likely to be
	 * selected as a cluster center. Now it is rho * distanceToMaster. It can
	 * also be rho^alpha * distanceToMaster.
	 * ****************************************************************
	 */
	public void computePriority() {
		priority = new double[dataset.numInstances()];
		for (int i = 0; i < dataset.numInstances(); i++) {
			priority[i] = densities[i] * distanceToMaster[i];
		}//of for i
	}//of computePriority
	
	/**
	 * *******************************************************************
	 * The cluster of a node should be the same as that of its master. This recursive method is efficient.
	 * @param paraIndex  The index of the given node.
	 * @return  The cluster index of the current node.
	 * *******************************************************************
	 */
	public int coincideWithMaster(int paraIndex) {
		if (clusterIndices[paraIndex] == -1) {
			int tempMaster = masters[paraIndex];
			clusterIndices[paraIndex] = coincideWithMaster(tempMaster);
		}//of if
		
		return clusterIndices[paraIndex];
	}//of coincideWithMaster
	
	/**
	 * *********************************************************************
	 * Cluster a block in two according to the master tree.
	 * 
	 * @param paraBlock  The given block.
	 * @return  The new blocks, where the two most representative instances serve as the roots.
	 * *********************************************************************
	 */
	public int[][] clusterInTwo(int[] paraBlock) {
		// Reinitialize. In fact, only instances in the given block are considered.
		Arrays.fill(clusterIndices, -1);
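		// -1 indicates that the cluster of the instance has not been determined yet.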
		
		// Initialize the cluster number of the two roots.
		//Set the cluster indices of the instances referenced by the first two elements of paraBlock to 0 and 1, respectively.
		//If the true class labels of these two instances happen to be the same, does it matter here?
		for (int i = 0; i < 2; i++) {
			clusterIndices[paraBlock[i]] = i;
		}//of for i
		
		for (int i = 0; i < paraBlock.length; i++) {
			if (clusterIndices[paraBlock[i]] != -1) {
				continue;
			}//of if
			
			//Instance i takes the same cluster label as its master.
			clusterIndices[paraBlock[i]] = coincideWithMaster(masters[paraBlock[i]]);
		}//of for i
		
		//The sub blocks.
		int[][] resultBlock = new int[2][];
		int tempFirstBlockCount = 0;
		//The loop length is clusterIndices.length; the instances not in paraBlock were filled with -1 above,
		//so only the positions paraBlock[i] carry data. If the length were paraBlock.length instead, the condition
		//below would have to be clusterIndices[paraBlock[i]] == 0; otherwise i could not cover all instances.
		for (int i = 0; i <clusterIndices.length; i++) {
			if (clusterIndices[i] == 0) {
				tempFirstBlockCount++;
			}//of if
		}//of for i
		resultBlock[0] = new int[tempFirstBlockCount];
		resultBlock[1] = new int[paraBlock.length - tempFirstBlockCount];
		
		// Copy. You can design shorter code when the number of clusters is greater than 2.
		int tempFirstIndex = 0;
		int tempSecondIndex = 0;
		for (int i = 0; i < paraBlock.length; i++) {
			if (clusterIndices[paraBlock[i]] == 0) {
				resultBlock[0][tempFirstIndex] = paraBlock[i];
				tempFirstIndex ++;
			} else {
				resultBlock[1][tempSecondIndex] = paraBlock[i];
				tempSecondIndex ++;
			}//of if
		}//of for i
		
		System.out.println("Split (" + paraBlock.length + ") instances "
				+ Arrays.toString(paraBlock) + "\r\nto (" + resultBlock[0].length + ") instances "
				+ Arrays.toString(resultBlock[0]) + "\r\nand (" + resultBlock[1].length
				+ ") instances " + Arrays.toString(resultBlock[1]));
		
		return resultBlock;
	}//of clusterInTwo
	
	/**
	 * **************************************************************
	 * Classify instances in the block by simple voting.
	 * 
	 * @param paraBlock   The given block.
	 * **************************************************************
	 */
	public void vote(int[] paraBlock) {
		int[] tempClassCounts = new int[dataset.numClasses()];
		
		for (int i = 0; i < paraBlock.length; i++) {
			if (instanceStatusArray[paraBlock[i]] == 1) {
				//"1"代表可查询标签的实例。
				tempClassCounts[(int)dataset.instance(paraBlock[i]).classValue()]++;
			}//of if
		}//of for i
		
		int tempMaxClass = -1;
		int tempMaxCount = -1;
		
		for (int i = 0; i < tempClassCounts.length; i++) {
			if (tempMaxCount < tempClassCounts[i]) {
				tempMaxCount = tempClassCounts[i];
				tempMaxClass = i;
			}//of if
		}//of for i
		
		// Classify unprocessed instances.
		for (int i = 0; i < paraBlock.length; i++) {
			if (instanceStatusArray[paraBlock[i]] == 0) {
				predictedLabels[paraBlock[i]] = tempMaxClass;
				instanceStatusArray[paraBlock[i]] = 2;
			}//of if
		}//of for i
	}//of vote
	
	/**
	 * *****************************************************************************************************
	 * Cluster-based active learning. The driver method: prepare the data structures and start the recursive version.
	 * 
	 * @param paraRatio   The ratio of the maximal distance as the dc.
	 * @param paraMaxNumQuery    The maximal number of queries for the whole dataset.
	 * @param paraSmallBlockThreshold    The small block threshold.
	 * *****************************************************************************************************
	 */
	public void clusterBasedActiveLearning(double paraRatio, int paraMaxNumQuery, int paraSmallBlockThreshold) {
		radius = maximalDistance * paraRatio;
		smallBlockThreshold = paraSmallBlockThreshold;
		
		maxNumQuery = paraMaxNumQuery;
		predictedLabels = new int[dataset.numInstances()];
		
		for (int i = 0; i < dataset.numInstances(); i++) {
			predictedLabels[i] = -1;
		}//of for i
		
		computeDensitiesGaussian();
		computeDistanceToMaster();
		computePriority();
		descendantRepresentatives = mergeSortToIndices(priority);
		System.out.println("descendantRepresentatives = " + Arrays.toString(descendantRepresentatives));
		
		numQuery = 0;
		
		clusterBasedActiveLearning(descendantRepresentatives);
	}//of clusterBasedActiveLearning
	
	/**
	 * *******************************************************************************************************************
	 * Cluster based active learning.
	 * 
	 * @param paraBlock  The given block. This block must be sorted according to the priority in descendant order.
	 * *******************************************************************************************************************
	 */
	public void clusterBasedActiveLearning(int[] paraBlock) {
		System.out.println("clusterBasedActiveLearning for block " + Arrays.toString(paraBlock));
		
		// Step 1. How many labels are queried for this block.
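		// At most sqrt(N) new labels are queried for a block of N instances (Step 2 of the basic idea); instances queried in ancestor blocks are not queried again.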
		int tempExpectedQueries = (int)Math.sqrt(paraBlock.length);
		int tempNumQuery = 0;
		
		for (int i = 0; i < paraBlock.length; i++) {
			if (instanceStatusArray[paraBlock[i]] == 1) {
				tempNumQuery ++;
			}//of if
		}//of for i
		
		// Step 2. Vote for small blocks.
		if ((tempNumQuery >= tempExpectedQueries) && (paraBlock.length <= smallBlockThreshold)) {
			System.out.println("" + tempNumQuery + " instances are queried, vote for block: \r\n"
					+ Arrays.toString(paraBlock));
			vote(paraBlock);
			return;
		}//of if
		
		// Step 3. Query enough labels.
		for (int i = 0; i < tempExpectedQueries; i++) {
			if (numQuery >= maxNumQuery) {
				System.out.println("No more queries are provided, numQuery = " + numQuery + ".");
				vote(paraBlock);
				return;
			}//of if
			
			if (instanceStatusArray[paraBlock[i]] == 0) {
				instanceStatusArray[paraBlock[i]] = 1;
				predictedLabels[paraBlock[i]] = (int)dataset.instance(paraBlock[i]).classValue();
				numQuery ++;
			}//of if
		}//of for i
		
		//Step 4. Pure?
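		// The first tempExpectedQueries instances of the block have been queried; the block is pure if they all share the same label.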
		int tempFirstLabel = predictedLabels[paraBlock[0]];
		boolean tempPure = true;
		for (int i = 1; i < tempExpectedQueries; i++) {
			if (predictedLabels[paraBlock[i]] != tempFirstLabel) {
				tempPure = false;
				break;
			}//of if
		}//of for i
		
		if (tempPure) {
			System.out.println("Classify for pure block: " + Arrays.toString(paraBlock));
			for (int i = tempExpectedQueries; i < paraBlock.length; i++) {
				if (instanceStatusArray[paraBlock[i]] == 0) {
					instanceStatusArray[paraBlock[i]] = 2;
					predictedLabels[paraBlock[i]] = tempFirstLabel;
				}//of if
				
			}//of for i
			return;
		}//of if
		
		// Step 5. Split in two and process them independently.
		int[][] tempBlocks = clusterInTwo(paraBlock);
		for (int i = 0; i < 2; i++) {
			clusterBasedActiveLearning(tempBlocks[i]);
		}//of for i
	}//of clusterBasedActiveLearning
	
	/**
	 ****************************************************** 
	 * Show the statistics information.
	 ******************************************************
	 */
	public String toString() {
		int[] tempStatusCounts = new int[3];
		double tempCorrect = 0;
		
		for (int i = 0; i < dataset.numInstances(); i++) {
			tempStatusCounts[instanceStatusArray[i]]++;
			if (predictedLabels[i] == (int) dataset.instance(i).classValue()) {
				tempCorrect ++;
			}//of if
		}//of for i
		
		String resultString = "(unprocessed, queried, classified) = " + Arrays.toString(tempStatusCounts);
		
		resultString += "\r\nCorrect = " + tempCorrect + ", accuracy = " + (tempCorrect / dataset.numInstances());
		
		return resultString;
	}//of toString
	
	/**
	 * ********************************************************************
	 * The entrance of the program.
	 * 
	 * @param args
	 * ********************************************************************
	 */
	public static void main(String args[]) {
		long tempStart = System.currentTimeMillis();
		
		System.out.println("Starting ALEC.");
		String arffFileName = "E:/Datasets/UCIdatasets/其他数据集/iris.arff";
		Alec tempAlec = new Alec(arffFileName);
		
		// The settings for iris
		tempAlec.clusterBasedActiveLearning(0.15, 30, 3);
		System.out.println(tempAlec);
		
		long tempEnd = System.currentTimeMillis();
		System.out.println("Runtime: " + (tempEnd - tempStart) + "ms.");
	}//of main
	
}//of Alec

Some of my understanding of the code is noted in the comments above.
