Setting up passwordless SSH login
Last updated: 2022-04-01 09:51:38
Suppose the Linux machine Client is the client, Server is the server, and the user name is user. We want to configure passwordless SSH login from Client to Server.
1. On Client, generate a key pair by running ssh-keygen; just press Enter at every prompt to accept the defaults. The output looks like this:
~~~
[user@Client .ssh]$ ssh-keygen -d
Generating public/private dsa key pair.
~~~
The key pair id_dsa and id_dsa.pub is now stored in /home/user/.ssh.
2. Copy the public key id_dsa.pub to Server by any means, for example:
~~~
[user@Client .ssh]$ scp id_dsa.pub Server:/home/user
~~~
3. Log in to Server and run the following command:
~~~
cat id_dsa.pub >> /home/user/.ssh/authorized_keys
~~~
That completes the setup; from now on you can log in to Server from Client without entering a password. This is secure: you do not need to worry that someone could also log in to Server without a password from another machine. The basic principle is this:
The files id_dsa and id_dsa.pub generated on Client form a key pair; data encrypted with the public key id_dsa.pub can only be decrypted with the private key id_dsa. You store the public key on the server (its contents live in authorized_keys). When you log in, the server sends you a challenge encrypted with the public key id_dsa.pub; your machine decrypts it with the private key id_dsa, and once the challenge is answered the server lets you in. Other machines do not have the private key id_dsa, cannot decrypt the challenge, and therefore cannot log in without a password.
Of course, you must keep the private key on your machine safe. If someone gets hold of your private key, it is as if they had the key to your house: they can use it to open your front door.
MPI usage notes
Last updated: 2022-04-01 09:51:36
MPI is the foundational interface specification for distributed computing. It has many implementations, such as Intel MPI and Open MPI, which provide the concrete pieces behind the interface, for example the communication protocols.
MPI has a few important concepts: rank, group, communicator, type, pack, spawn, and window. Once you understand these, you have the basics of MPI.
**group** is an important MPI concept. A process can belong to several groups, and the real power of groups is that you can form any group you like at any time and then use intra-group and inter-group communicators to express the intermediate stages of a complex scientific computation easily, for example one group for the odd ranks and another for the even ranks, or groups arranged in some topology. MPI also provides a default global group, the one behind MPI_COMM_WORLD; for simple applications that single group is usually enough. A concrete odd/even split is sketched below.
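As a concrete illustration of the odd/even split mentioned above, here is a minimal sketch (my own example; it only assumes the standard MPI C API):
~~~
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color 0 = even ranks, color 1 = odd ranks; key keeps the original order */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d has rank %d in its odd/even group\n", world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}
~~~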
**rank** identifies a single compute unit (process) within a group. Using ranks it is easy to build a client/server structure, for example rank 0 is the server and every other rank is a client.
**communicator** covers the various communication patterns: one-to-one, one-to-many, many-to-one, where "many" usually means a group. During a transfer the tag is very useful for distinguishing different task types; the usual pattern is to look at the tag first and only then interpret the actual data. Think of the difference between the envelope and the letter inside it: keeping that distinction in mind makes a program much easier to extend, as the sketch below shows.
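A tiny sketch of the envelope idea, using MPI_Probe to read the envelope before the contents (the tag convention and buffer are assumptions of mine, following the fragment style used elsewhere in this column):
~~~
#include <mpi.h>
{
    MPI_Status status;
    int count;
    double data[1024];

    /* Read the "envelope" (source, tag, size) without consuming the message. */
    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_DOUBLE, &count);

    /* Dispatch on the tag, then read the "letter" itself. */
    if (status.MPI_TAG == 1) {   /* assume tag 1 means "task data" */
        MPI_Recv(data, count, MPI_DOUBLE, status.MPI_SOURCE,
                 status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
}
~~~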
**type** refers to MPI's user-defined datatypes. Ordinary code works with structs, arrays and scattered variables, which cannot be communicated directly; MPI provides its own corresponding type definitions, and once you convert your data layout into an MPI datatype you can communicate it freely.
**pack** bundles scattered data into a single buffer so it can be sent conveniently. Its purpose is much like that of derived types: if you do not want the trouble of defining a type, you can simply pack the data and send it.
**spawn** is one of the key features that separate MPI-2 from MPI-1. With spawn, the number of processes can be changed while the program is running; probably only fairly complex software needs this.
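For reference, a minimal sketch of adding processes at run time with the MPI-2 call MPI_Comm_spawn (the worker binary name and the count of 4 are made up for illustration):
~~~
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm workers;

    MPI_Init(&argc, &argv);

    /* Launch 4 extra processes running "./worker"; the result is an
       intercommunicator connecting the parent group with the children. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}
~~~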
**window** provides one-sided communication: a process exposes a region of its memory that other processes can access remotely. It is only worth using when the network is very good; otherwise it can make the program much less efficient.
Finally, keep in mind that the same piece of code may be executed by many machines at the same time; be careful to distinguish the role each part of the code plays.
Handling files with MPI
Last updated: 2022-04-01 09:51:33
http://www.mcs.anl.gov/research/projects/mpi/usingmpi2/examples/starting/main.htm
Below is only the static, MPI-1 based version of broadcasting a file; the page above also provides a dynamic version based on MPI-2.
It also covers MPI file-handling patterns quite thoroughly, for example several nodes processing the same file.
Master code:
~~~
/* pcp from SUT, in MPI */
#include "mpi.h"
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>     /* strcpy */
#include <unistd.h>     /* read, write, close */
#define BUFSIZE 256*1024
#define CMDSIZE 80
int makehostlist( char spec[80], char filename[80] );
int main( int argc, char *argv[] )
{
int mystatus, allstatus, done, numread;
char outfilename[128], controlmsg[80];
int infd, outfd;
char buf[BUFSIZE];
MPI_Init( &argc, &argv );
makehostlist( argv[1], "targets" );
strcpy( outfilename, argv[3] );
if ( (infd = open( argv[2], O_RDONLY ) ) == -1 ) {
fprintf( stderr, "input file %s does not exist\n", argv[2] );
sprintf( controlmsg, "exit" );
MPI_Bcast( controlmsg, CMDSIZE, MPI_CHAR, 0, MPI_COMM_WORLD );
MPI_Finalize();
return( -1 );
}
else {
sprintf( controlmsg, "ready" );
MPI_Bcast( controlmsg, CMDSIZE, MPI_CHAR, 0, MPI_COMM_WORLD );
}
strcpy( controlmsg, outfilename ); /* copy, not sprintf: the filename is not a format string */
MPI_Bcast( controlmsg, CMDSIZE, MPI_CHAR, 0, MPI_COMM_WORLD );
if ( (outfd = open( outfilename, O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU ) ) == -1 )
mystatus = -1;
else
mystatus = 0;
MPI_Allreduce( &mystatus, &allstatus, 1, MPI_INT, MPI_MIN,
MPI_COMM_WORLD );
if ( allstatus == -1 ) {
fprintf( stderr, "output file %s could not be opened\n", outfilename );
MPI_Finalize();
return( -1 );
}
/* at this point all files have been successfully opened */
done = 0;
while ( !done ) {
numread = read( infd, buf, BUFSIZE );
MPI_Bcast( &numread, 1, MPI_INT, 0, MPI_COMM_WORLD );
if ( numread > 0 ) {
MPI_Bcast( buf, numread, MPI_BYTE, 0, MPI_COMM_WORLD );
write( outfd, buf, numread );
}
else {
close( outfd );
done = 1;
}
}
MPI_Finalize();
}
int makehostlist( char spec[80], char filename[80] )
{
/* Stub: derive the list of target hosts from spec and write it to filename. */
return 0;
}
~~~
Slave code:
~~~
/* pcp from the Scalable Unix Tools, in MPI */
#include "mpi.h"
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>     /* strcmp */
#include <unistd.h>     /* write, close */
#define BUFSIZE 256*1024
#define CMDSIZE 80
int main( int argc, char *argv[] )
{
int mystatus, allstatus, done, numread;
char outfilename[128], controlmsg[80];
int outfd;
char buf[BUFSIZE];
MPI_Init( &argc, &argv );
MPI_Bcast( controlmsg, CMDSIZE, MPI_CHAR, 0, MPI_COMM_WORLD );
if ( strcmp( controlmsg, "exit" ) == 0 ) {
MPI_Finalize();
return -1;
}
MPI_Bcast( controlmsg, CMDSIZE, MPI_CHAR, 0, MPI_COMM_WORLD );
if ( (outfd = open( controlmsg, O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU ) ) == -1 )
mystatus = -1;
else
mystatus = 0;
MPI_Allreduce( &mystatus, &allstatus, 1, MPI_INT, MPI_MIN,
MPI_COMM_WORLD );
if ( allstatus == -1 ) {
MPI_Finalize();
return( -1 );
}
/* at this point all files have been successfully opened */
done = 0;
while ( !done ) {
MPI_Bcast( &numread, 1, MPI_INT, 0, MPI_COMM_WORLD );
if ( numread > 0 ) {
MPI_Bcast( buf, numread, MPI_BYTE, 0, MPI_COMM_WORLD );
write( outfd, buf, numread );
}
else {
close( outfd );
done = 1;
}
}
MPI_Finalize();
}
~~~
MPI Collective communication
Last updated: 2022-04-01 09:51:31
Collective communication means all processes within a communicator call the same routine. Portable applications should assume that collective routines include a global synchronization.
The following simple code fragment employs four basic collective routines to manipulate a statically partitioned regular domain (one-dimensional in this case). The full domain length is broadcast from a root process to all others. The initial dataset is distributed (scattered) among the processes. After each compute step, a global maximum is determined for use by the root. The root then gathers the final dataset.
~~~
#include <mpi.h>
{
int i;
int myrank;
int size;
int root;
int full_domain_length;
int sub_domain_length;
double global_max;
double local_max;
double *full_domain;
double *sub_domain;
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
root = 0;
/*
* Root obtains full domain and broadcasts its length.
*/
if (myrank == root) {
get_full_domain(&full_domain, &full_domain_length);
}
MPI_Bcast(&full_domain_length, 1, MPI_INT, root, MPI_COMM_WORLD);
/*
* Allocate subdomain memory.
* Scatter the initial dataset among the processes.
*/
sub_domain_length = full_domain_length / size;
sub_domain = (double *) malloc(sub_domain_length * sizeof(double));
MPI_Scatter(full_domain, sub_domain_length, MPI_DOUBLE,
sub_domain, sub_domain_length, MPI_DOUBLE,
root, MPI_COMM_WORLD);
/*
* Loop computing and determining max values.
*/
for (i = 0; i < NSTEPS; ++i) {
compute(sub_domain, sub_domain_length, &local_max);
MPI_Reduce(&local_max, &global_max, 1, MPI_DOUBLE,
MPI_MAX, root, MPI_COMM_WORLD);
}
/*
* Gather final dataset.
*/
MPI_Gather(sub_domain, sub_domain_length, MPI_DOUBLE,
full_domain, sub_domain_length, MPI_DOUBLE,
root, MPI_COMM_WORLD);
}
~~~
### Broadcast
~~~
MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
int root, MPI_Comm comm);
~~~
All processes use the same count, datatype, root, and communicator. Before the operation, the root buffer contains a message. After the operation, all buffers contain the message from the root process.
### Scatter
~~~
MPI_Scatter(void *sndbuf, int sndcnt, MPI_Datatype sndtype,
void *rcvbuf, int rcvcnt, MPI_Datatype rcvtype,
int root, MPI_Comm comm);
~~~
All processes use the same send and receive counts, datatypes, root and communicator. Before the operation, the root send buffer contains a message of length `sndcnt * N`, where N is the number of processes. After the operation, the message is divided equally and dispersed to all processes (including the root) following rank order.
### Reduce
~~~
MPI_Reduce(void *sndbuf, void *rcvbuf, int count,
MPI_Datatype datatype, MPI_Op op,
int root, MPI_Comm comm);
~~~
All processes use the same count, datatype, reduction operation, root and communicator. After the operation, the root process has in its receive buffer the result of the pair-wise reduction of the send buffers of all processes, including its own. MPI predefines reduction operations, including: MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_BAND, MPI_LOR, MPI_BOR, MPI_LXOR, MPI_BXOR.
### Gather
~~~
MPI_Gather(void *sndbuf, int sndcnt, MPI_Datatype sndtype,
void *rcvbuf, int rcvcnt, MPI_Datatype rcvtype,
int root, MPI_Comm comm);
~~~
All processes use the same send and receive counts, datatypes, root and communicator. This routine is the reverse of MPI_Scatter(): after the operation the root process has in its receive buffer the concatenation of the send buffers of all processes (including its own), with a total message length of `rcvcnt * N`, where N is the number of processes. The message is gathered following rank order.
Basic MPI datatypes
Last updated: 2022-04-01 09:51:29
http://www.lam-mpi.org/tutorials/one-step/datatypes.php
Heterogeneous computing requires that the data constituting a message be typed or described somehow so that its machine representation can be converted between computer architectures. MPI can thoroughly describe message datatypes, from the simple primitive machine types to complex structures, arrays and indices.
MPI messaging functions accept a datatype parameter, whose C typedef isMPI_Datatype:
~~~
MPI_Send(void* buf, int count, MPI_Datatype datatype,
int dest, int tag, MPI_Comm comm);
~~~
### Basic Datatypes
Everybody uses the primitive machine datatypes. Some C examples are listed below (with the corresponding C datatype in parentheses):
~~~
MPI_CHAR (char)
MPI_INT (int)
MPI_FLOAT (float)
MPI_DOUBLE (double)
~~~
The count parameter in MPI_Send( ) refers to the number of elements of the given datatype, not the total number of bytes.
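For instance (a trivial sketch; dest and tag are assumed to be set elsewhere):
~~~
int values[10];
/* count is 10 elements of MPI_INT, not sizeof(values) bytes */
MPI_Send(values, 10, MPI_INT, dest, tag, MPI_COMM_WORLD);
~~~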
For messages consisting of a homogeneous, contiguous array of basic datatypes, this is the end of the datatype discussion. For messages that contain more than one datatype or whose elements are not stored contiguously in memory, something more is needed.
### Strided Vector
Consider a mesh application with patches of a 2D array assigned to different processes. The internal boundary rows and columns are transferred between north/south and east/west processes in the overall mesh. In C, the transfer of a row in a 2D array is simple: a contiguous vector of elements equal in number to the number of columns in the 2D array. Conversely, the elements of a single column are dispersed in memory, each separated from its next and previous indices by the size of one entire row.
An MPI derived datatype is a good solution for a non-contiguous data structure. A code fragment to derive an appropriate datatype matching this strided vector and then transmit the last column is listed below:
~~~
#include <mpi.h>
{
float mesh[10][20];
int dest, tag;
MPI_Datatype newtype;
/*
* Do this once.
*/
MPI_Type_vector(10, /* # column elements */
1, /* 1 column only */
20, /* skip 20 elements */
MPI_FLOAT, /* elements are float */
&newtype); /* MPI derived datatype */
MPI_Type_commit(&newtype);
/*
* Do this for every new message.
*/
MPI_Send(&mesh[0][19], 1, newtype,
dest, tag, MPI_COMM_WORLD);
}
~~~
MPI_Type_commit( ) separates the datatypes you really want to save and use from the intermediate ones that are scaffolded on the way to some very complex datatype.
A nice feature of MPI derived datatypes is that once created, they can be used repeatedly with no further set-up code. MPI has many other derived datatype constructors; one of them is sketched below.
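As one example of those other constructors, MPI_Type_contiguous simply concatenates copies of an existing type. A minimal sketch (the count of 20 is chosen to match one row of the mesh above):
~~~
#include <mpi.h>
{
    MPI_Datatype row_type;

    /* 20 contiguous floats, i.e. one full row of the 10x20 mesh */
    MPI_Type_contiguous(20, MPI_FLOAT, &row_type);
    MPI_Type_commit(&row_type);

    /* row_type can now be used in sends and receives like any basic type */
}
~~~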
### C Structure
Consider an imaging application that is transferring fixed length scan lines of eight bit color pixels. Coupled with the pixel array is the scan line number, an integer. The message might be described in C as a structure:
~~~
struct {
int lineno;
char pixels[1024];
} scanline;
~~~
In addition to a derived datatype, message packing is a useful method for sending non-contiguous and/or heterogeneous data. A code fragment to pack and send the above structure is listed below:
~~~
#include <mpi.h>
#include <stdlib.h>     /* malloc */
{
unsigned int membersize, maxsize;
int position;
int dest, tag;
char *buffer;
/*
* Do this once.
*/
MPI_Pack_size(1, /* one element */
MPI_INT, /* datatype integer */
MPI_COMM_WORLD, /* consistent comm. */
&membersize); /* max packing space req'd */
maxsize = membersize;
MPI_Pack_size(1024, MPI_CHAR, MPI_COMM_WORLD, &membersize);
maxsize += membersize;
buffer = malloc(maxsize);
/*
* Do this for every new message.
*/
position = 0;
MPI_Pack(&scanline.lineno, /* pack this element */
1, /* one element */
MPI_INT, /* datatype int */
buffer, /* packing buffer */
maxsize, /* buffer size */
&position, /* next free byte offset */
MPI_COMM_WORLD); /* consistent comm. */
MPI_Pack(scanline.pixels, 1024, MPI_CHAR,
buffer, maxsize, &position, MPI_COMM_WORLD);
MPI_Send(buffer, position, MPI_PACKED,
dest, tag, MPI_COMM_WORLD);
}
~~~
A buffer is allocated once to contain the size of the packed structure. The size must be computed because of implementation dependent overhead in the message. Variable sized messages can be handled by allocating a buffer large enough for the largest possible message. The position parameter to MPI_Pack( ) always returns the current size of the packed buffer.
A code fragment to unpack the message, assuming a receive buffer has been allocated, is listed below:
~~~
{
int src;
int msgsize;
MPI_Status status;
MPI_Recv(buffer, maxsize, MPI_PACKED,
src, tag, MPI_COMM_WORLD, &status);
position = 0;
MPI_Get_count(&status, MPI_PACKED, &msgsize);
MPI_Unpack(buffer, /* packing buffer */
msgsize, /* buffer size */
&position, /* next element byte offset */
&scanline.lineno, /* unpack this element */
1, /* one element */
MPI_INT, /* datatype int */
MPI_COMM_WORLD); /* consistent comm. */
MPI_Unpack(buffer, msgsize, &position,
scanline.pixels, 1024, MPI_CHAR, MPI_COMM_WORLD);
}
~~~
You should be able to modify the above code fragments for any structure. It is completely possible to alter the number of elements to unpack based on application information unpacked previously in the same message.
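The same scanline message could also be described with a derived datatype instead of being packed. A sketch using MPI-2's MPI_Type_create_struct (the struct tag and variable names are mine, mirroring the structure above; dest and tag are assumed to be set elsewhere):
~~~
#include <mpi.h>
#include <stddef.h>
{
    struct scanline_t {
        int  lineno;
        char pixels[1024];
    } scanline;

    MPI_Datatype scanline_type;
    int          blocklens[2] = { 1, 1024 };
    MPI_Aint     displs[2]    = { offsetof(struct scanline_t, lineno),
                                  offsetof(struct scanline_t, pixels) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_CHAR };

    MPI_Type_create_struct(2, blocklens, displs, types, &scanline_type);
    MPI_Type_commit(&scanline_type);

    /* One send now moves the whole structure, no packing required. */
    MPI_Send(&scanline, 1, scanline_type, dest, tag, MPI_COMM_WORLD);
}
~~~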
Basic skeleton of the MPI master/slave pattern
Last updated: 2022-04-01 09:51:27
Rank 0 is the master; all other ranks are slaves.
http://www.lam-mpi.org/tutorials/one-step/ezstart.php
~~~
#include <mpi.h>
#define WORKTAG 1
#define DIETAG 2
/* Local functions */
static void master(void);
static void slave(void);
static unit_of_work_t get_next_work_item(void);
static void process_results(unit_result_t result);
static unit_result_t do_work(unit_of_work_t work);
int
main(int argc, char **argv)
{
int myrank;
/* Initialize MPI */
MPI_Init(&argc, &argv);
/* Find out my identity in the default communicator */
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
master();
} else {
slave();
}
/* Shut down MPI */
MPI_Finalize();
return 0;
}
static void
master(void)
{
int ntasks, rank;
unit_of_work_t work;
unit_result_t result;
MPI_Status status;
/* Find out how many processes there are in the default
communicator */
MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
/* Seed the slaves; send one unit of work to each slave. */
for (rank = 1; rank < ntasks; ++rank) {
/* Find the next item of work to do */
work = get_next_work_item();
/* Send it to each rank */
MPI_Send(&work, /* message buffer */
1, /* one data item */
MPI_INT, /* data item is an integer */
rank, /* destination process rank */
WORKTAG, /* user chosen message tag */
MPI_COMM_WORLD); /* default communicator */
}
/* Loop over getting new work requests until there is no more work
to be done */
work = get_next_work_item();
while (work != NULL) {
/* Receive results from a slave */
MPI_Recv(&result, /* message buffer */
1, /* one data item */
MPI_DOUBLE, /* of type double real */
MPI_ANY_SOURCE, /* receive from any sender */
MPI_ANY_TAG, /* any type of message */
MPI_COMM_WORLD, /* default communicator */
&status); /* info about the received message */
/* Send the slave a new work unit */
MPI_Send(&work, /* message buffer */
1, /* one data item */
MPI_INT, /* data item is an integer */
status.MPI_SOURCE, /* to who we just received from */
WORKTAG, /* user chosen message tag */
MPI_COMM_WORLD); /* default communicator */
/* Get the next unit of work to be done */
work = get_next_work_item();
}
/* There's no more work to be done, so receive all the outstanding
results from the slaves. */
for (rank = 1; rank < ntasks; ++rank) {
MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
MPI_ANY_TAG, MPI_COMM_WORLD, &status);
}
/* Tell all the slaves to exit by sending an empty message with the
DIETAG. */
for (rank = 1; rank < ntasks; ++rank) {
MPI_Send(0, 0, MPI_INT, rank, DIETAG, MPI_COMM_WORLD);
}
}
static void
slave(void)
{
unit_of_work_t work;
unit_result_t result;
MPI_Status status;
while (1) {
/* Receive a message from the master */
MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG,
MPI_COMM_WORLD, &status);
/* Check the tag of the received message. */
if (status.MPI_TAG == DIETAG) {
return;
}
/* Do the work */
result = do_work(work);
/* Send the result back */
MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
}
}
static unit_of_work_t
get_next_work_item(void)
{
/* Fill in with whatever is relevant to obtain a new unit of work
suitable to be given to a slave. */
}
static void
process_results(unit_result_t result)
{
/* Fill in with whatever is relevant to process the results returned
by the slave */
}
static unit_result_t
do_work(unit_of_work_t work)
{
/* Fill in with whatever is necessary to process the work and
generate a result */
}
~~~
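The skeleton deliberately leaves unit_of_work_t, unit_result_t and the three helper functions as placeholders. To compile it you have to supply them yourself; one minimal assumption, matching the MPI_INT/MPI_DOUBLE datatypes used in the send and receive calls, would be:
~~~
/* Hypothetical placeholder types (my assumption, not part of the original
   tutorial): a work item is a plain int and a result is a double, matching
   the MPI_INT / MPI_DOUBLE datatypes used above.  With these definitions the
   `work != NULL' test in master() should be read as checking a sentinel
   value that means "no more work". */
typedef int    unit_of_work_t;
typedef double unit_result_t;
~~~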
An MPI ring: each rank sends a message to the next rank
Last updated: 2022-04-01 09:51:24
https://www.sharcnet.ca/help/index.php/Getting_Started_with_MPI
~~~
#include <stdio.h>
#include <mpi.h>
#define BUFMAX 81
int main(int argc, char *argv[])
{
char outbuf[BUFMAX], inbuf[BUFMAX];
int rank, size;
int sendto, recvfrom;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
sprintf(outbuf, "Hello, world! from process %d of %d", rank, size);
sendto = (rank + 1) % size;
recvfrom = ((rank + size) - 1) % size;
MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto, 0, MPI_COMM_WORLD);
MPI_Recv(inbuf, BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);
printf("[P_%d] process %d said: \"%s\"]\n", rank, recvfrom, inbuf);
MPI_Finalize();
return(0);
}
~~~
The code above avoids deadlock only because it relies on intermediate buffering of the transfer: each send is written straight into a buffer and returns without waiting for the receiver's response. The code runs correctly, but its correctness does not rest entirely on the MPI standard. Below is an MPI-safe version.
~~~
#include <stdio.h>
#include <mpi.h>
#define BUFMAX 81
int main(int argc, char *argv[])
{
char outbuf[BUFMAX], inbuf[BUFMAX];
int rank, size;
int sendto, recvfrom;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
sprintf(outbuf, "Hello, world! from process %d of %d", rank, size);
sendto = (rank + 1) % size;
recvfrom = ((rank + size) - 1) % size;
if (!(rank % 2))
{
MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto, 0, MPI_COMM_WORLD);
MPI_Recv(inbuf, BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);
}
else
{
MPI_Recv(inbuf, BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);
MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto, 0, MPI_COMM_WORLD);
}
printf("[P_%d] process %d said: \"%s\"]\n", rank, recvfrom, inbuf);
MPI_Finalize();
return(0);
}
~~~
The biggest difference from the version above is that the ranks are split by parity into two groups whose send/receive order is reversed, so even without any buffering the result is still correct; that is what makes it safe.
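Another idiomatic way to make the ring safe is MPI_Sendrecv, which lets the MPI library pair the send and the receive internally, so it cannot deadlock no matter how much buffering is available. A minimal sketch of just the communication step (it would replace the whole if/else block above):
~~~
/* Combined send-to-next / receive-from-previous; MPI does the pairing. */
MPI_Sendrecv(outbuf, BUFMAX, MPI_CHAR, sendto,   0,
             inbuf,  BUFMAX, MPI_CHAR, recvfrom, 0,
             MPI_COMM_WORLD, &status);
~~~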
Open MPI setup
Last updated: 2022-04-01 09:51:22
On every node, perform steps 1-3 below.
1. Install Open MPI:
http://www.open-mpi.org/software/ompi/v1.6/
If the system already ships with an older Open MPI by default, remove the old one first.
2. Configure passwordless SSH login:
See: [how to configure passwordless SSH](http://hanhuzi.blogspot.com/2008/09/ssh.html)
3. In the working directory, create a host file (any file name will do) listing all node names, for example:
~~~
#>cat hosts
host1 slots=4
host2 slots=4
host3 slots=4
~~~
————————
The working directory must be identical on all nodes, and the host file name and contents must be identical too; the simplest approach is to mount the working directory of every node from the same NFS export.
————————
On the head node, write, compile and run the program (steps 4-8).
4. Example program:
~~~
[root@dhcp-beijing-cdc-10-182-120-155 ~]# cat hello.c
#include <stdio.h>
#include "mpi.h"
int main(int argc, char *argv[])
{
int nproc;
int iproc;
char proc_name[MPI_MAX_PROCESSOR_NAME];
int nameLength;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&nproc);
MPI_Comm_rank(MPI_COMM_WORLD,&iproc);
MPI_Get_processor_name(proc_name,&nameLength);
printf("Hello World, Iam host %s with rank %d of %d\n", proc_name,iproc,nproc);
MPI_Finalize();
return 0;
}
~~~
5. Compile the example:
~~~
mpicc -o hello hello.c
~~~
6. Copy the executable hello to the same working directory on every node (you can skip this step if you use NFS).
7. Run hello:
~~~
#> mpirun -np 2 -hostfile hosts ./hello
~~~
(The number 2 above tells mpirun to start two processes.)
You should now see greetings coming from the other nodes.
8. Of course, you can replace hello with some other command, for example:
~~~
#> mpirun -np 3 -hostfile hosts ls
~~~
This runs ls on each node and shows the results on the head node.
Using the Intel® MPI Library in a server/client setup
Last updated: 2022-04-01 09:51:20
http://www.mpi-forum.org/docs/mpi-20-html/node106.htm
http://software.intel.com/zh-cn/articles/using-the-intel-mpi-library-in-a-serverclient-setup
How to build a server/client architecture with MPI:
1) Build two applications. Both use MS-MPI and share the same MPI_COMM_WORLD through the API.
2) Run the two applications at the same time, like below:
job submit /numnodes:3 /askednodes:server1,node1,node2 mpiexec -hosts 1 server1 1 win-form-app.exe : -hosts 2 node1 1 node2 1 console-app.exe
where:
- In total 3 nodes are used, and each node runs 1 process. If you want to run more than 1 process on a certain node, you can do -hosts 2 node1 M node2 N ...
- server1 will run your win form application; node1 and node2 will run the console application.
### Overview
In some instances, it can be advantageous to have an MPI program join a job after it has started. Additional resources can be added to a long job as they become available, or a more traditional server/client program can be created. This can be facilitated with the MPI_Comm_accept and MPI_Comm_connect functions.
### Key Functions
- **MPI_Open_port** - Creates the port that is used for the communications. This port is given a name that is used to reference it later, both by the server and the client. Only the server program calls MPI_Open_port.
- **MPI_Comm_accept** - Uses the previously opened port to listen for a connecting MPI program. This is called by the server and will create an intercommunicator once it completes.
- **MPI_Comm_connect** - Connects to another MPI program at the named port. This is called by the client and will create an intercommunicator once it completes.
### Notes
- The programs must use the same fabric in order to connect, as the port is dependent on the fabric.
- The programs must be on the same operating system in order to connect. Different versions/distributions of the same operating system could work, but this has not been tested and is not supported.
- The method of getting the port name from the server to the client can vary. In the sample provided, a text file is written containing the port name; a minimal sketch of that hand-off follows below.
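A minimal sketch of the server side of that file-based hand-off (the file name and location are my own assumption, not necessarily what Intel's attached sample uses):
~~~
#include <mpi.h>
#include <stdio.h>
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    FILE *fp;

    /* Open a port and leave its name where the client can read it. */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    fp = fopen("portname.txt", "w");      /* assumed shared location */
    fprintf(fp, "%s\n", port_name);
    fclose(fp);

    /* Block until the client connects through that port. */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
}
~~~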
### Example
A very simple example is attached to this article. The server opens a port, writes the name of the port to a file, and waits for the client. The client will read the file and attempt to connect to the port. To verify that the two programs are connected, each sends a pre-defined value to the other. To compile and run the example, download the files and place them in the same folder. Open two terminals and navigate to the folder where the files are located. In the first terminal, use:
~~~
mpiicpc server.cpp -o server
mpirun -n 1 ./server
~~~
And in the second terminal:
~~~
mpiicpc client.cpp -o client
mpirun -n 1 ./client
~~~
In Windows*, change **mpirun** to **mpiexec**. With the code as provided, the server should show:
~~~
Waiting for a client
A client has connected
The server sent the value: 25
The server received the value: 42
~~~
And the client should show:
~~~
Attempting to connect
Connected to the server
The client sent the value: 42
The client received the value: 25
~~~
Simple client/server example (from the MPI-2 standard, section 5.4.6.3):
This is a simple example; the server accepts only a single connection at a time and serves that connection until the client requests to be disconnected. The server is a single process.
Here is the server. It accepts a single connection and then processes data until it receives a message with tag 1. A message with tag 0 tells the server to exit.
~~~
#include "mpi.h"
int main( int argc, char **argv )
{
MPI_Comm client;
MPI_Status status;
char port_name[MPI_MAX_PORT_NAME];
double buf[MAX_DATA];
int size, again;
MPI_Init( &argc, &argv );
MPI_Comm_size(MPI_COMM_WORLD, &size);
if (size != 1) error(FATAL, "Server too big");
MPI_Open_port(MPI_INFO_NULL, port_name);
printf("server available at %s\n",port_name);
while (1) {
MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
&client );
again = 1;
while (again) {
MPI_Recv( buf, MAX_DATA, MPI_DOUBLE,
MPI_ANY_SOURCE, MPI_ANY_TAG, client, &status );
switch (status.MPI_TAG) {
case 0: MPI_Comm_free( &client );
MPI_Close_port(port_name);
MPI_Finalize();
return 0;
case 1: MPI_Comm_disconnect( &client );
again = 0;
break;
case 2: /* do something */
...
default:
/* Unexpected message type */
MPI_Abort( MPI_COMM_WORLD, 1 );
}
}
}
}
~~~
Here is the client.
~~~
#include "mpi.h"
int main( int argc, char **argv )
{
MPI_Comm server;
double buf[MAX_DATA];
char port_name[MPI_MAX_PORT_NAME];
MPI_Init( &argc, &argv );
strcpy(port_name, argv[1] );/* assume server's name is cmd-line arg */
MPI_Comm_connect( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
&server );
while (!done) {
tag = 2; /* Action to perform */
MPI_Send( buf, n, MPI_DOUBLE, 0, tag, server );
/* etc */
}
MPI_Send( buf, 0, MPI_DOUBLE, 0, 1, server );
MPI_Comm_disconnect( &server );
MPI_Finalize();
return 0;
}
~~~
Parallel computing: notes on my own confusion
Last updated: 2022-04-01 09:51:17
First, my understanding of the difference between MPI and multithreading. I used to think of MPI as essentially a thread pool; if that were all it is, why would MPI need to exist at all? Just for a prettier interface? Clearly not. The point that is easy to overlook is MPI across multiple machines. That matters a great deal, because managing the network yourself in a multithreaded design is far too complex and error-prone, whereas MPI wraps all of that up very conveniently and can serve as the foundation of an entire software architecture.
Also, OpenMP and Open MPI are two completely different things: one only supports shared-memory parallelism across the cores of a single machine, while the other supports cluster computing.
Preface
Last updated: 2022-04-01 09:51:15
> Original source: [MPI distributed programming](http://blog.csdn.net/column/details/mpipractice.html)
Author: [wangeen](http://blog.csdn.net/wangeen)
**This series is compiled and published on Kanyun (看云) with the author's permission; please do not repost without the author's consent!**
# MPI Distributed Programming
> MPI is the definition of an interface for distributed computing. This column collects code notes and small lessons learned from MPI distributed programming, giving an overall picture of the MPI framework and the general approach to designing programs on top of it.