aelius Posted March 4, 2014

The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.

Features implemented or in short-term development for Open MPI include:
- Full MPI-3 standards conformance
- Thread safety and concurrency
- Dynamic process spawning
- Network and process fault tolerance
- Support for network heterogeneity
- Single library supports all networks
- Run-time instrumentation
- Many job schedulers supported
- Many OSes supported (32 and 64 bit)
- Production quality software
- High performance on all platforms
- Portable and maintainable
- Tunable by installers and end-users
- Component-based design, documented APIs
- Active, responsive mailing list
- Open source license based on the BSD license

Official page: Open MPI: Open Source High Performance Computing
Documentation: Open MPI v1.6.4 documentation
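The "tunable by installers and end-users" and "component-based design" points above refer to Open MPI's MCA (Modular Component Architecture) parameters, which can be inspected and overridden without rebuilding the library. A minimal sketch, assuming the ompi_info and mpirun tools are on the PATH (the program name ./my_mpi_app is only a placeholder):

    # list the components this Open MPI build contains
    ompi_info
    # show the MCA parameters of the TCP byte-transfer-layer component
    ompi_info --param btl tcp
    # override parameters at launch time, e.g. restrict transports to TCP and self
    mpirun --mca btl tcp,self -np 4 ./my_mpi_app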
pyth0n3 Posted March 5, 2014

A few years ago I wrote a tutorial about OpenMPI:
i. OpenMPI: How Does It Work?
ii. Building a distributed resource cluster
iii. Setting up OpenMPI
iv. Running the code
v. John the Ripper with OpenMPI

Code for the example from the tutorial:

/* Test of MPI: rank 0 sends a greeting to every other rank,
   and each worker replies with its rank and processor name. */
#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char idstr[32];
    char buff[128];
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int numprocs;
    int myid;
    int i;
    int namelen;
    MPI_Status stat;

    MPI_Init(&argc, &argv);                           /* start the MPI runtime */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);         /* number of processes in the job */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);             /* rank of this process */
    MPI_Get_processor_name(processor_name, &namelen);

    if (myid == 0) {
        /* master: greet every worker, then collect the replies */
        printf("We have %d processors\n", numprocs);
        for (i = 1; i < numprocs; i++) {
            sprintf(buff, "Hello %d", i);
            MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
        }
        for (i = 1; i < numprocs; i++) {
            MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
            printf("%s\n", buff);
        }
    } else {
        /* worker: receive the greeting, append identity, and reply */
        MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
        sprintf(idstr, " Processor %d at node %s ", myid, processor_name);
        strcat(buff, idstr);
        strcat(buff, "reporting for duty");
        MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
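For reference, the example above is built with Open MPI's compiler wrapper and launched with mpirun. A minimal sketch, assuming the source is saved as mpi_test.c and that a hostfile named hosts lists the cluster nodes (both file names are placeholders, not from the original tutorial):

    # compile with the Open MPI wrapper compiler
    mpicc mpi_test.c -o mpi_test
    # run 4 processes on the local machine
    mpirun -np 4 ./mpi_test
    # spread 8 processes across the nodes listed in "hosts"
    mpirun -np 8 --hostfile hosts ./mpi_test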
aelius Posted March 5, 2014 (Author)

New world, welcome back pyth0n3