Raising the level of abstraction for developing message passing applications
Abstract: Message Passing Interface (MPI) is the most popular standard for writing portable and scalable parallel applications for distributed memory architectures. Writing efficient parallel applications using MPI is a complex task, mainly due to the extra burden on programmers to explicitly handle all the complexities of message passing (viz., inter-process communication, data distribution, load balancing, and synchronization). The main goal of our research is to raise the level of abstraction of explicit parallelization using MPI such that the effort involved in developing parallel applications is significantly reduced, in terms of the amount of code written manually, while avoiding intrusive changes to existing sequential programs. In this research, generative programming tools and techniques are combined with a domain-specific language, Hi-PaL (High-Level Parallelization Language), to automate the process of generating and inserting the code required for parallelization into existing sequential applications. The results show that the performance of the generated applications is comparable to that of the manually written versions, while requiring no explicit changes to the existing sequential code.
Detailed description

Author: Arora, Ritu (author)
Co-authors: Bangalore, Purushotham; Mernik, Marjan
Format: Article
Language: English
Published: 2010
Subjects: Parallel programming; Explicit parallelization; MPI; Abstraction; Generative programming; Domain-specific language
Note: © Springer Science+Business Media, LLC 2010
Contained in: The journal of supercomputing - Springer US, 1987, 59(2010), 2, 09 Nov., pages 1079-1100
Contained in: volume:59 ; year:2010 ; number:2 ; day:09 ; month:11 ; pages:1079-1100
DOI: 10.1007/s11227-010-0490-3
Catalog ID: OLC2033938662
LEADER 01000caa a22002652 4500
001    OLC2033938662
003    DE-627
005    20230504053716.0
007    tu
008    200819s2010 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11227-010-0490-3 |2 doi
035    |a (DE-627)OLC2033938662
035    |a (DE-He213)s11227-010-0490-3-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |a 620 |q VZ
100 1  |a Arora, Ritu |e verfasserin |4 aut
245 10 |a Raising the level of abstraction for developing message passing applications
264  1 |c 2010
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media, LLC 2010
520    |a Abstract Message Passing Interface (MPI) is the most popular standard for writing portable and scalable parallel applications for distributed memory architectures. Writing efficient parallel applications using MPI is a complex task, mainly due to the extra burden on programmers to explicitly handle all the complexities of message-passing (viz., inter-process communication, data distribution, load-balancing, and synchronization). The main goal of our research is to raise the level of abstraction of explicit parallelization using MPI such that the effort involved in developing parallel applications is significantly reduced in terms of the reduction in the amount of code written manually while avoiding intrusive changes to existing sequential programs. In this research, generative programming tools and techniques are combined with a domain-specific language, Hi-PaL (High-Level Parallelization Language), for automating the process of generating and inserting the required code for parallelization into the existing sequential applications. The results show that the performance of the generated applications is comparable to the manually written versions of the applications, while requiring no explicit changes to the existing sequential code.
650  4 |a Parallel programming
650  4 |a Explicit parallelization
650  4 |a MPI
650  4 |a Abstraction
650  4 |a Generative programming
650  4 |a Domain-specific language
700 1  |a Bangalore, Purushotham |4 aut
700 1  |a Mernik, Marjan |4 aut
773 08 |i Enthalten in |t The journal of supercomputing |d Springer US, 1987 |g 59(2010), 2 vom: 09. Nov., Seite 1079-1100 |w (DE-627)13046466X |w (DE-600)740510-8 |w (DE-576)018667775 |x 0920-8542 |7 nnns
773 18 |g volume:59 |g year:2010 |g number:2 |g day:09 |g month:11 |g pages:1079-1100
856 41 |u https://doi.org/10.1007/s11227-010-0490-3 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-TEC
912    |a SSG-OLC-MAT
912    |a GBV_ILN_70
951    |a AR
952    |d 59 |j 2010 |e 2 |b 09 |c 11 |h 1079-1100
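Illustrative sketch only (not the paper's Hi-PaL, which generates MPI code for existing sequential programs): even a trivial parallel sum forces the programmer to spell out data distribution, inter-process communication, and synchronization by hand — the burden the abstract attributes to explicit MPI programming. Worker threads with an explicit result queue stand in for MPI processes here so the example is self-contained.

```python
import threading
import queue

def worker(rank, chunk, results):
    # each worker computes a partial result and "sends" it back
    results.put((rank, sum(chunk)))

def parallel_sum(data, nworkers=4):
    # data distribution: deal the input out across the workers
    chunks = [data[r::nworkers] for r in range(nworkers)]
    results = queue.Queue()
    threads = [threading.Thread(target=worker, args=(r, c, results))
               for r, c in enumerate(chunks)]
    for t in threads:
        t.start()
    # synchronization: wait until every worker has finished
    for t in threads:
        t.join()
    # communication: gather the partial results and reduce them
    return sum(part for _, part in (results.get() for _ in threads))

print(parallel_sum(list(range(100))))  # -> 4950
```

The paper's point is that this scaffolding (and its far more intricate MPI equivalent) can be generated automatically from a high-level specification instead of being written and maintained by hand.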