Editors Overview
rtpc maintains an Editorial Board of practicing researchers from around the world to ensure that manuscripts are handled by editors who are experts in the field of study.
About the Journal
Recent Trends in Parallel Computing [2393-8749(e)] is a peer-reviewed hybrid open-access journal launched in 2014. Parallel computing is a form of computation in which many calculations are carried out at the same time; it works on the principle that large problems can often be divided into smaller ones, which are then solved in parallel. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks, increasing the speed at which those tasks execute.
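As a purely illustrative sketch of this divide-and-solve-in-parallel principle (not drawn from any article published in the journal), the C fragment below uses an OpenMP reduction to split one large summation into per-thread pieces; the array size, fill values, and variable names are arbitrary choices made for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 10000000   /* size of the "large problem" */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        for (long i = 0; i < N; i++)
            a[i] = 1.0 / (i + 1);        /* fill with some data */

        double sum = 0.0;
        /* The large summation is divided into per-thread chunks;
           each chunk is summed in parallel, and the partial sums
           are combined by the reduction clause. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        free(a);
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop iterations are distributed across the available cores, which is exactly the idea of splitting a large problem into smaller ones solved in parallel described above.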
Focus and Scope
- Tree, Diamond Network, Mesh, Linear Array, Star, Hypercube, Chordal ring, Cube-connected-cycles: Cayley graphs, communication networks, diameter, (d,k) graphs, parallel processing architectures, VLSI layouts, hypercubic networks, N-node hypercube, cube-connected cycles, multilayer grid model, butterfly networks, generalized hypercubes, hierarchical swapped networks, indirect swapped networks, folded hypercubes, reduced hypercubes, recursive hierarchical swapped networks, enhanced-cubes, All-to-all broadcast, Cube-connected cycle, Hypercube, Multihop routing, Wavelength division multiplexing.
- ILLIAC IV, Torus, PM2I, Butterfly, Mesh-of-tree: Network-on-Chip, Mesh-of-Tree topology, Spare core, Communication cost, Integer Linear Programming, Particle Swarm Optimization, Interconnection network, MoT, DOR, NoC, Deterministic routing, Routing, Throughput, Topology, Routing protocols, IP networks, Computer architecture, System-on-a-chip, Network-on-a-chip, Scalability, Parallel processing, Network topology, System recovery, Network servers, Region 10, Computational modeling.
- Pyramid, Generalized Hypercube: Feature extraction, Three-dimensional displays, Hypercubes, Training, Image color analysis, Robots, Object recognition, convolutional hypercube pyramid, RGB-D object category, instance recognition, deep learning, computer vision, RGB-D images, training data deficiency, multimodality input dissimilarity, RGB-D object recognition, point cloud data, convolutional neural network, CNN, coarse-to-fine feature representation, fusion scheme, classification, extreme learning machines, ELM, nonlinear classifiers, Interconnection networks, graph embeddings, incomplete hypercubes, shuffle-trees, pyramids, the mesh of trees.
- Twisted Cube, Folded Hypercube: Hypercubes, Fault tolerance, Robustness, Multiprocessor interconnection networks, Computer science, Performance analysis, Routing, Computer architecture, Fault-tolerant systems, Multiprocessing systems, operationally enhanced folded hypercubes, performance, reliability, operation mode, fault-tolerance, twisted hypercube.
- Cross-connected Cube: Cross-connected bidirectional pyramid network (CBP-Net), infrared small-dim target detection, regular constraint loss (RCL), region of interest (ROI) feature augment, Feature extraction, Object detection, Proposals, Convolution, Loss measurement, Information filters, Signal to noise ratio, Product Space, Finite Automaton, Absolute Sense, Focal Condition, Spontaneous Generation.
- Parallel Architectures: Shared Memory, Scalable Multi-Processors, Interconnection networks: Parallel Machine, Context Switch, Runtime System, Memory Controller, Cache Coherence, Cache Size, Memory Block, Directory Scheme, Directory Entry, Interconnection Network, Total Execution Time, Page Fault, Parallel Speedup, Error Recovery, Memory Element, Faulty Node.
- Task and Data parallelism, Programming for performance: Message Passing Interface, Runtime System, Data Parallelism, Task Parallelism, High-Performance Fortran, Schedule Policy, Task Schedule, Execution Model, Load Imbalance, Parallel Composition, Communicating Sequential Processes, Parallel Variable, Parallel Programming, Local View, Partial Application, Algorithmic Skeleton, Flow Solver, Task Expression, Task Program, Resource Request, Service Time, Cost Model, Input Stream, Stage Pipeline, Relative Speedup.
- Multi-Core programming: Properties of Multi-Core architectures, Pthreads, OpenMP: Main Memory, Execution Model, Runtime System, Heterogeneous Architecture, Embedded Memory, Multiprocessor System-on-chip (MPSoC), Platform Description, Control Data Flow Graph (CDFG), Self-timed Scheduling, Dataflow, Newton’s method, OpenMP parallel computing technology, Multithreading, Finite difference, Multi-core processors, Parallelization, Parallel computation, Parallel algorithm, Performance analysis, Reduction Operation, Likelihood Score, Programming Paradigm, Shared Memory Architecture, OpenMP Version, General Purpose Multi-cores, Heterogeneous Multi-cores, Graphical Processing Units, High-Performance Computing, Fine-grain Parallelism, Computer Architectures, Acceleration, Phylogeny, Parallel processing, Multicore processing, Computer applications, Concurrent computing, Bioinformatics, Computer architecture, Scalability, Graphics (a minimal shared-memory programming sketch appears after this list).
- GPU, Accelerators: Dense Linear Algebra Solvers, GPU Accelerators, Multicore, MAGMA, Hybrid Algorithms, multicore systems, graphics processing unit, hybridization techniques, Cholesky factorization, LU factorization, QR factorization, parallel programming model, optimized BLAS software, LAPACK software, architecture-specific optimization, algorithm-specific optimization, MAGMA library, Linear algebra, Multicore processing, Acceleration, Iterative algorithms, Linear accelerators, Linear systems, Equations, Computer architecture, Scientific computing, Numerical simulation, Tiles, Kernel, Libraries, Runtime, Algorithm design and analysis, Single Precision, Double Precision Arithmetic, Multiple GPUs, Data Tile, GPU, Power consumption, Multi-Processor Allocation, Bandwidth Utilization, optimal multiprocessor allocation algorithm, high performance GPU accelerators, heat dissipation, high performance computing systems, cooling infrastructure, BU, MultiProcessor requirements, MPlloc algorithm, performance degradation.
- Multi-core architectures: Computer architecture, Throughput, Yarn, Parallel processing, Performance gain, Multithreading, Delay, Computer science, Milling machines, Microprocessors, single-ISA heterogeneous multicore architectures, multithreaded workload performance, chip multiprocessor, job matching, single-thread performance, thread parallelism, dynamic core assignment policies, static assignment, heterogeneous architectures, multithreading cores, comparable-area homogeneous architecture, naive policy, Bandwidth, Space technology, Power system interconnection, Joining processes, Space exploration, Design engineering, Power engineering and energy, interconnections, multicore architectures, on-chip interconnects, interconnect architectures, multicore design, interconnect bandwidth, hierarchical bus structure, Thermal management, Multicore processing, Temperature control, Dynamic voltage scaling, Frequency, Degradation, Multi-Core Architectures, Dynamic Thermal Management, Activity Migration, Dynamic Voltage and Frequency Scaling.
- Parallel programs on multi-core machines: Parallelism and concurrency, distributed programming, heterogeneous (hybrid) systems, distributed memory systems, parallel programming, shared memory systems, Computational modeling, Graphics processing unit, Message systems, Instruction sets, Multicore processing, distributed computing, programming environments, message passing, multiprocessing, Skeleton, Libraries, Computer architecture, Benchmark testing, Distributed databases, Runtime, PDES, multi-threaded, optimistic simulation, multi-core systems, optimization, Receivers, Synchronization.
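To make the shared-memory, multi-core programming topics listed above (Pthreads, OpenMP, task decomposition) a little more concrete, the following is a minimal, purely illustrative Pthreads sketch in C; the thread count, array size, and function names are assumptions made for the example and are not taken from any published article.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000

    static double data[N];            /* shared input array        */
    static double partial[NTHREADS]; /* one partial sum per thread */

    static void *worker(void *arg) {
        long id = (long)arg;
        long chunk = N / NTHREADS;
        long lo = id * chunk;
        long hi = (id == NTHREADS - 1) ? N : lo + chunk;
        double s = 0.0;
        for (long i = lo; i < hi; i++)   /* each thread sums its own slice */
            s += data[i];
        partial[id] = s;                 /* no lock needed: disjoint slots */
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);

        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("total = %f\n", total);
        return 0;
    }

The same decomposition could equally be expressed with OpenMP directives, MPI message passing, or GPU kernels, all of which appear among the topics above.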
Open Access Statement
Recent Trends in Parallel Computing (rtpc) is an open-access (OA) publication that provides immediate open access to its content, on the principle that making research freely available to the public supports a greater global exchange of knowledge. All published works are available to a worldwide audience, free of charge, immediately upon publication. Publication in the journal is subject to payment of an article processing charge (APC). The APC supports the journal and ensures that articles remain freely accessible online in perpetuity under a Creative Commons license.
Publication Ethics Statement
rtpc fully adheres to the Committee on Publication Ethics (COPE) Code of Conduct and to its Best Practice Guidelines. The Editorial Team enforces a rigorous peer-review process with strict ethical policies and standards to ensure that high-quality scientific studies are added to the field of scholarly publication. Where rtpc becomes aware of ethical issues, it is committed to investigating them and taking the necessary actions to maintain the integrity of the literature and ensure the safety of research participants. Further details are available in the journal's Research & Publication Ethics policy.
Content Disclaimer
- All information, opinions, and views expressed here are those of the authors and contributors of the articles.
- Publication of articles, advertisements, or product information does not constitute endorsement or approval by the journal.
- The journal cannot be held responsible for any errors or consequences arising from the use of the information published in it.
- Although rtpc makes every effort to ensure that no inaccurate or misleading data, opinions, or statements appear in the journal, the data and opinions appearing in the articles are the responsibility of the contributors concerned.