However, there are more practical system constraints (for example, multichip pinout, message-routing delay times and message-routing decision times) for which meshes are non-optimal, and so improved system topologies must be found. The need for better parallel architectures is made even more urgent when we consider the difficulty of porting general inhomogeneous algorithms to mesh architectures, and their diminishing performance as the number of nodes increases.
Figure 1. 4×4 Processor DC-Hypermesh Architecture
(P = processor, C = Comms node)
Hypermesh architectures have communications links which span a number of processing nodes. Groups of nodes are connected by bus, crossbar switch, or by distributed crossbars. Figure 1 shows a Distributed Crossbar (DC) architecture, where somewhat increased wiring density is traded for increased communications bandwidth and shorter message passing latency. We have shown the following benefits for the DC Hypermesh:
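One way to see the latency advantage is to compare worst-case hop counts. The sketch below is an illustrative model only (not taken from the proposal): it assumes a 2-D mesh with nearest-neighbour links, and a DC-Hypermesh in which every row and every column of processors shares a crossbar link, so any node can reach any other in at most one row hop plus one column hop.

```python
import math

def mesh_diameter(n_side):
    # 2-D nearest-neighbour mesh: worst case is corner to corner,
    # (n_side - 1) hops in each dimension.
    return 2 * (n_side - 1)

def hypermesh_diameter(n_side):
    # DC-Hypermesh model: each row and column shares a crossbar,
    # so the worst case is one row hop plus one column hop,
    # independent of machine size.
    return 2

for n_side in (4, 8, 16, 32):
    nodes = n_side * n_side
    print(f"{nodes:5d} nodes: mesh diameter {mesh_diameter(n_side):3d}, "
          f"hypermesh diameter {hypermesh_diameter(n_side)}")
```

Under these assumptions the mesh diameter grows as O(sqrt(N)) with node count N, while the hypermesh diameter stays constant, which is the scaling behaviour motivating the architecture above.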
|Months 0–10:|Design of a four-node hypermesh cluster based on PowerPC 604 processors connected via PCI interfaces. Design of component communications logic ICs and PCB design for the new interface cards.|
|Months 11–15:|Construction and logic/communications test of the four-node hypermesh cluster.|
|Months 13–16:|Porting of a fully distributed operating system.|
|Months 16–18:|Performance testing.|
|Months 16–24:|Evaluation of parallelised software, harness schemes and the operating system.|
|Months 6–24:|Development of an industrial partnership.|
Figure B1. Data routing in a Comms Node of the DC-Hypermesh.
Figure B2. Implementation of DC-Hypermesh Comms Node.