# Knowledge Base

Lumerical photonic design products support single-system parallel computing. Using multiple cores or processors on your machine when running simulations provides better performance than using only one core.

This page describes how to configure your system so you can run jobs on several computers within your local area network. Lumerical's design software must be installed on each computer, as described in the main Installation instructions section, before you attempt to configure the solvers or simulation engines.

There are two ways of running the simulations across multiple computers:

Concurrent computing refers to running multiple simulations at the same time on several computers or compute nodes.

Distributed computing refers to running a single simulation across multiple computers or compute nodes, giving access to a greater total amount of memory and reducing computation time.

## Notes

Running multiple simulations across several computers simultaneously requires as many licenses as there are computers running simulations (i.e. #licenses = #nodes).

All computers must run on the same operating system.

Product software must be of the same version on all computers or nodes.

Concurrent computing is currently supported by all Lumerical design software.

Distributed computing across multiple computers or compute nodes, on the other hand, is only available for the FDTD solver and the varFDTD solver in MODE Solutions.

### Contents

Note: If you have only one computer, skip steps 1-3.

1. Configure the firewall

Many Linux clusters communicate across a private Ethernet network, so firewall security may not be required. If no firewall is in use on your network, this step may be skipped.

- The MPI processes communicate using a range of ports. The easiest solution is to simply disable the firewall on all nodes. An alternate solution is to configure MPI to use a specific range of ports, then create exceptions for those ports. See the MPI documentation for details.

If you want to leave the firewall turned on, two additional firewall exceptions are required:

- In some configurations, MPI requires the use of the ssh programs to start remote processes on the compute nodes during parallel execution. Ensure that the ssh port 22 is allowed to accept incoming TCP/IP connections on all of your compute nodes.
- The FlexNet License Manager default port list can be found on the license manager configuration page.
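If the firewall must stay on, the exceptions above can be sketched as firewall configuration commands. The sketch below assumes a firewalld-based Linux distribution; the 50000-50100 port range is an arbitrary example, and the `MPICH_PORT_RANGE` variable is the MPICH2 ch3 setting for restricting ports (check the documentation for your MPI variant):

```shell
# Sketch, assuming firewalld (run as root on each compute node).
# Allow incoming ssh (port 22) so MPI can start remote processes:
firewall-cmd --permanent --add-service=ssh
# Allow an example port range for the MPI processes themselves:
firewall-cmd --permanent --add-port=50000-50100/tcp
firewall-cmd --reload

# Then restrict MPICH2 to that same range so the exception matches:
export MPICH_PORT_RANGE=50000:50100
```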

2. Set up a network directory

To launch simulations on an arbitrary node, it is best to set up a network file system. The network file system must be accessible to all nodes under the same name. This allows any node to access the simulation files.

- This is most easily accomplished with a network drive. The network drive should be accessible to all computers using the same Windows UNC path name, for example \\server\public\temp; or by the same mapped drive letter, e.g. on Windows computers, access the shared folder or drive using the same mapped drive such as "Drive X:\Shared"; or by mounting the shared folder on each node at the same location, e.g. /mnt/nfs/shared.
- On many Linux clusters/networks, each user's home directory is a network file system and is common to all nodes. If this is the case, you may use your home directory to store your simulation files.
- For more information on creating a network file system, see your operating system documentation.
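A quick way to confirm that every node sees the shared directory under the same path is to write a marker file from one node and read it back from the others. A minimal local sketch (the /tmp path below is a stand-in for your real mount point, such as /mnt/nfs/shared):

```shell
# Stand-in for the shared mount point; on a real cluster this would
# be something like /mnt/nfs/shared and already exist on every node.
SHARED=/tmp/shared-fs-demo
mkdir -p "$SHARED"

# Write a marker file; on a working network file system, every node
# should see the same file at the same path.
echo "written by $(hostname)" > "$SHARED/visibility-check.txt"
cat "$SHARED/visibility-check.txt"
```

On a real cluster, write the file on one node and read it from the rest, e.g. `ssh <node name> cat /mnt/nfs/shared/visibility-check.txt`.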

3. Set up passwordless login

Windows:

Use MPICH2 as the job launching preset and set the login credentials beforehand. See Setting MPICH2 credentials for details.

Linux:

Configure your compute nodes to allow remote login without a password, as the version of MPICH2 included with the installation package uses ssh to start remote jobs. If this is not configured, the user will have to type their password each time MPICH2 is called to run the simulation. On your primary computer, enter the following commands to create a set of ssh keys.

```shell
ssh-keygen -t rsa
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
```

Press enter several times to accept all the defaults and an empty passphrase. This creates your public/private keys and saves them in your home directory under the $HOME/.ssh folder. Next, you must place your public key in the text file $HOME/.ssh/authorized_keys on each compute node. This can be accomplished using the following commands for each compute node:

```shell
ssh <node name> "mkdir -p ~/.ssh; chmod 700 ~/.ssh"
cat ~/.ssh/id_rsa.pub | ssh <node name> "cat >> ~/.ssh/authorized_keys"
ssh <node name> "chmod 600 ~/.ssh/authorized_keys"
ssh <node name>
```

### Note

If your home directory is on a network file system and is the same directory for all compute nodes, you only need to run the above commands once. Once you have completed this step, you should be able to log in to any of the compute nodes using the ssh <node name> command without entering a password.
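With several compute nodes, the per-node commands above are easy to wrap in a loop. The sketch below uses ssh-copy-id, which appends your public key to the remote authorized_keys file and sets the permissions in one step; node1 and node2 are hypothetical hostnames, and the echo prefix makes this a dry run:

```shell
# Hypothetical compute node hostnames; replace with your own.
NODES="node1 node2"
for node in $NODES; do
    # Dry run: prints the command instead of executing it.
    # Remove 'echo' to actually distribute the key.
    echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "$node"
done
```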

If you need to find the IP address of a computer, use the command /sbin/ifconfig eth0 (or hostname -I on modern Linux distributions where ifconfig is not installed).

4. Check resources

1. Open the Lumerical software (FDTD and MODE Solutions only).

2. Open the Resource configuration utility (in the Simulation -> Configure resources menu). MODE Solutions only: note that resources are configured on a per-solver basis (one solver per tab), and only the varFDTD solver allows for parallel computing.

3. Change the number of processes.

4. Edit each resource's properties as needed.

5. Use the Run tests button to confirm the resources are set up properly.

### Note

Windows: Use MPICH2 as the Job launching preset.

To use a different MPI variant, see the link below that corresponds to your operating system:

- Windows: run solver with MPI
- Linux: run solver with MPI
- MacOS: run solver with MPI

(Screenshots: MODE varFDTD solver tab; Resources using multiple computers)

5. Run a parameter sweep example

Run the s-parameter sweep from the Y-branch example, which can be downloaded from within FDTD Solutions. Follow the instructions on our website to run the simulation and the sweep, and compare results: App Gallery: Y-Branch

Note: Running the simulation creates a log file named _p0.log. This log file can be helpful when debugging problems with simulations running across several computers.
