
DARKO STOSIC

HIGH PERFORMANCE GINZBURG-LANDAU SIMULATIONS OF SUPERCONDUCTIVITY

Universidade Federal de Pernambuco
posgraduacao@cin.ufpe.br
www.cin.ufpe.br/~posgraduacao

Recife 2018


High-performance Ginzburg-Landau simulations of superconductivity

Thesis presented to the Programa de Pós-graduação em Ciência da Computação of the Centro de Informática of Universidade Federal de Pernambuco and the Doctoral Study Program of the University of Antwerp, in partial fulfillment of the requirements for the joint degree of Doctor in Computer Science and Physics.

Advisor: Teresa Bernarda Ludermir
Joint Advisor: Milorad Vlado Milošević

Recife
2018


Cataloging at source
Librarian: Monick Raquel Silvestre da S. Portes, CRB4-1217

S888h  Stosic, Darko
       High performance Ginzburg-Landau simulations of superconductivity / Darko Stosic. – 2018. 159 leaves: ill., fig.

       Advisor: Teresa Bernarda Ludermir.
       Doctoral thesis – Universidade Federal de Pernambuco, CIn, Ciência da Computação, Recife, 2018.
       Includes references and appendix.

       1. Artificial intelligence. 2. Superconductivity. I. Ludermir, Teresa Bernarda (advisor). II. Title.

       006.3  CDD (23. ed.)  UFPE-MEI 2018-106


Darko Stosic

“High-performance Ginzburg-Landau Simulations of Superconductivity”

Doctoral thesis presented to the Programa de Pós-Graduação em Ciência da Computação of Universidade Federal de Pernambuco, in partial fulfillment of the requirements for the degree of Doctor in Computer Science.

Approved on: 02/05/2018.

Advisor: Prof. Dr. Teresa Bernarda Ludermir

EXAMINATION BOARD

Prof. Dr. Tsang Ing Ren, Centro de Informática / UFPE
Prof. Dr. Paulo Salgado Gomes de Mattos Neto, Centro de Informática / UFPE
Prof. Dr. Milan Lalic, Departamento de Física / UFS
Prof. Dr. Viviane Moraes de Oliveira, Departamento de Física / UFRPE
Prof. Dr. Pedro Hugo de Figueirêdo


I would like to express sincere gratitude to my advisors Prof. Teresa Ludermir and Prof. Milorad Milošević for their continuous and unconditional support throughout my doctoral studies. I also gratefully acknowledge the funding received towards this doctoral work from the Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco (FACEPE, No. IBPG-0510-1.03/15).


Superconductivity is one of the most important discoveries of the last century. With many applications in physics, engineering, and technology, superconductors are crucial to our way of living. Several material and engineering issues however prevent their widespread usage in everyday life. Comprehensive studies are being directed at these materials and their properties to come up with new technologies that will address these challenges and enhance their superconductive capabilities. In this context, numerical modeling plays an important role in the search for new solutions to existing material and engineering issues. The time-dependent Ginzburg-Landau (TDGL) theory is a powerful predictive tool for modeling the macroscopic behavior of superconductors. However, most of the numerical algorithms developed so far are incapable of describing many basic properties of real superconducting devices, and are too slow on current hardware for the large-scale numerical simulations necessary for their accurate description. The purpose of this thesis is therefore to develop high-performing numerical solutions that can correctly describe material features, to be used as modeling tools for laboratory experiments. Important innovations introduced in this work include the numerical modeling of nonrectangular geometrical shapes with complex electrical and insulating components, the inclusion of dynamic heating of the material, and the description of different types of material inhomogeneities. These encompass the principal features necessary for a complete description of the superconductive physics in real material samples. In this thesis a numerical solution is developed for modeling superconducting thin films and used to study the superconductive properties of three experimental configurations: the dynamics of vortex matter in a Corbino disk, the motion of ultrafast vortices in an hourglass-shaped microbridge, and the photon detection process in a meander-patterned nanowire. Moreover, a numerical solution is developed for modeling three-dimensional superconductors, which are studied here for the first time in the type-I superconducting regime. These numerical algorithms are optimized to exploit the computational horsepower of graphics processing units (GPUs) and multicore central processing unit (CPU) clusters, such that they achieve high performance and can be used to model large-scale problems previously impossible on conventional machines. Several computational tools are also designed to assist with the modeling of superconducting devices. These include a numerical library of the TDGL equations, a novel mechanism for the generation of complex geometries, a closed-form solver to conduct numerical simulations, and a graphics user interface (GUI) to visualize the dynamic behavior of superconductors. The contributions in this thesis ultimately push the boundaries of what is possible in state-of-the-art numerical modeling of superconductivity.

Keywords: Superconductivity. Time-dependent Ginzburg-Landau equations. High-performance computing.



1D      One-dimensional
2D      Two-dimensional
3D      Three-dimensional
AFM     Atomic force microscopy
ALU     Arithmetic logic unit
API     Application programming interface
AVX     Advanced vector extensions
BCS     Bardeen-Cooper-Schrieffer
CPU     Central processing unit
CUDA    Compute Unified Device Architecture
cuFFT   CUDA Fast Fourier Transform
DCT     Discrete cosine transform
DFT     Discrete Fourier transform
DP      Double precision
DRAM    Dynamic random-access memory
DST     Discrete sine transform
FDM     Finite difference methods
FFT     Fast Fourier transform
FLOPS   Floating point operations per second
FPU     Floating-point unit
GL      Ginzburg-Landau
GPU     Graphics processing unit
gTDGL   Generalized time-dependent Ginzburg-Landau
GUI     Graphics user interface
IV      Current-voltage
IPC     Instructions per cycle
MKL     Math Kernel Library
MPI     Message passing interface
NbN     Niobium nitride
OpenMP  Open Multi-Processing
Pb      Lead
PCIe    Peripheral component interconnect express
PDE     Partial differential equations
RB      Rigid-body
ROM     Read-only memory
SEM     Scanning electron microscopy
SF      Superconductor-ferromagnet
SFU     Special function unit
SI      Superconductor-insulator
SIMD    Single instruction multiple data
SIMT    Single instruction multiple thread
SM      Streaming multiprocessor
SN      Superconductor-normal metal
SOR     Successive over-relaxation
SP      Single precision
SS      Superconductor-superconductor
SSE     Streaming SIMD extensions
SSPD    Superconducting single photon detector
SV      Superconductor-vacuum
TDGL    Time-dependent Ginzburg-Landau
TLB     Translation look-aside buffer


1 MOTIVATION
1.1 Motivation for this work
1.2 Contributions of this thesis
1.3 Organization of this thesis

2 INTRODUCTION
2.1 History of Superconductivity
2.2 History of Computing
2.3 Numerical Approaches for the Ginzburg-Landau equations
2.4 Supercomputing clusters

3 THE GINZBURG-LANDAU THEORY
3.1 Free energy expansion
3.1.1 Landau free energy
3.1.2 Kinetic energy
3.1.3 Magnetic energy
3.2 The Ginzburg-Landau equations
3.2.1 First Ginzburg-Landau equation
3.2.2 Second Ginzburg-Landau equation
3.2.3 Dimensionless equations
3.2.4 Time-dependent Ginzburg-Landau equations
3.3 Characteristic length scales
3.3.1 Coherence length
3.3.2 Penetration depth
3.3.3 Temperature dependence
3.3.4 Ginzburg-Landau parameter
3.4 Gauge transformations
3.4.1 Zero-electrostatic potential gauge
3.4.2 Coulomb gauge
3.4.3 Linear combination gauge
3.5 Validity of the theory
3.6 Types of superconductors
3.6.1 Type-I and Type-II superconductors
3.6.2 Mesoscopic superconductors
3.6.3 Superconducting thin films

4 ELECTRIC MODEL IN TWO DIMENSIONS
4.1 Theoretical model
4.2 Discretization of the equations
4.2.1 Finite-differences
4.2.2 Generalized time-dependent Ginzburg-Landau equation
4.3 Numerical methods
4.3.1 Solving scalar potential with two-dimensional Fourier transforms
4.3.2 Successive over-relaxation
4.3.3 Time evolution with Euler method
4.4 Computational techniques
4.4.1 Vectorization
4.4.2 Multithreading
4.4.3 Distributed parallelism
4.4.4 Intel MKL Poisson library
4.5 Performance and scalability
4.5.1 Register performance
4.5.2 Multicore scaling
4.5.3 Multinode scaling
4.5.4 Simulation execution
4.6 Simulations of vortex matter in exotic geometries
4.6.1 Corbino Disk
4.6.2 Ultrafast vortex dynamics
4.6.3 Superconducting single photon detector

5 MAGNETIC MODEL IN THREE DIMENSIONS
5.1 Theoretical model
5.2 Discretization of the equations
5.2.1 Finite-differences
5.2.2 Time-dependent Ginzburg-Landau equation
5.2.3 Poisson equation for vector potential
5.2.4 Discrete boundary conditions
5.3 Numerical methods
5.3.1 Solving vector potential with three-dimensional Fourier transforms
5.3.2 Time evolution with Euler method
5.4 Computational techniques
5.4.1 GPUs in finite differences
5.4.2 GPU computing and architecture
5.4.3 Algorithm
5.4.4 Coalesced memory accesses
5.4.5 Registers and instruction level parallelism
5.4.6 Special Function Units
5.4.7 Branch divergence
5.4.8 CUDA Fourier Transform Library
5.5 Performance benchmarks
5.5.1 Run configuration
5.5.2 Profiling metrics
5.5.3 Kernel performance
5.5.4 Convergence criteria
5.6.1 Simulation configuration
5.6.2 Time evolution
5.6.3 Energy diagram

6 COMPUTATIONAL TOOLS
6.1 Numerical Library
6.1.1 Overview
6.1.2 Parameter structure
6.1.2.1 Material parameters
6.1.2.2 Dimension parameters
6.1.2.3 Simulation parameters
6.1.2.4 Convergence parameters
6.1.3 Data layout
6.1.4 Macro operators
6.1.4.1 Memory operators
6.1.4.2 Mathematical operators
6.1.4.3 Example
6.1.5 Routines
6.1.5.1 Solver routines
6.1.5.2 Auxiliary routines
6.1.5.3 Mesh routines
6.1.5.4 API routines
6.1.6 Computer program
6.2 A mechanism for generating meshes
6.2.1 Superconducting map
6.2.2 Mesh generator
6.3 Closed-form solver
6.3.1 Interface
6.3.2 Material parameters
6.3.3 Numerical algorithms
6.3.4 Field and current sweeps
6.3.5 Load and store data
6.4 Graphics User Interface
6.4.1 Motivation
6.4.2 Interface

7 SUMMARY AND OUTLOOK
7.1 Summary
7.2 Outlook

REFERENCES


1 MOTIVATION

1.1 Motivation for this work

Superconductivity is one of the most important discoveries of the last century. With many applications in physics, engineering, and technology, superconductors are crucial to our way of living. They are widely used in small-scale applications as electronic components in digital and quantum circuitry, as sensitive magnetic field detectors in medicine and geological surveying, as photon detectors in optical quantum information applications, and as signal amplifiers in telecommunications. Superconductors are also used in large-scale applications as magnets in maglev trains, magnetic resonance imaging, and particle accelerators, and as energy devices for power transmission cables, transformers, fault limiters, and generators. Several material and engineering issues, however, lower the electromagnetic fields and currents as well as the temperatures at which superconductors can operate, and thus restrict their widespread use in everyday life. Some of these issues include the emergence of current crowding in nonregular geometries, the dissipation of heat induced by the motion of quantized magnetic flux, and the complex behavior of the condensate caused by material defects and inhomogeneities. Scientists are comprehensively studying these materials and their properties to come up with new technologies that will address these challenges and enhance their superconductive capabilities. In this respect, numerical modeling plays an integral role in the search for new solutions to existing material and engineering issues. Numerical models often work in complement to laboratory experiments, as they provide unique information that is otherwise inaccessible by experiments alone. In addition, numerical models can serve as precursors to technological innovations, where they are used to simulate candidate devices at much lower costs and with a higher degree of adjustability. Future advances in superconducting technology will heavily rely on the ability to accurately describe experiments through numerical models. The time-dependent Ginzburg-Landau theory emerges as one of the most powerful predictive tools for modeling the macroscopic behavior of superconductors. The theory has successfully reproduced many superconductive phenomena found in experiments and remains crucial to their understanding. However, most of the numerical algorithms developed so far are incapable of modeling many properties found in real superconducting devices, such as dynamic heating, material inhomogeneities, and even various geometrical components. In addition, the numerical challenges that arise from solving the time-dependent GL equations prevent the modeling of large-scale systems necessary for a correct description of most technological applications of superconductors. The purpose of this thesis is therefore to develop high-performing numerical solutions that can correctly describe these features and be used as modeling tools for laboratory experiments.


1.2 Contributions of this thesis

In this thesis we develop numerical solutions for solving the time-dependent GL equations. These solutions can correctly describe all relevant properties of superconductors used in most laboratory experiments. We also optimize them for high performance in highly parallel environments and conduct exemplary simulations on two supercomputing clusters. The contributions in this thesis push the boundaries of what is possible in state-of-the-art numerical modeling of superconductivity and are summarized in the following categories:

I. Exotic geometries: most superconducting components of technological devices are composed of exotic geometries. Some examples include meander-patterned nanowires used in photon detectors and ring-shaped wires used in superconducting quantum interference devices. The shape of the superconductor, however, has a considerable impact on its physical properties. Superconducting materials with nonregular shapes usually impose constrictions on the current density distribution, causing current lines to accumulate in certain regions of the sample – an effect known as current crowding. While a nonhomogeneous distribution provides a rich dynamic behavior of the condensate, the buildup of current at bends and edges of the superconductor can cause deterioration of the superconducting material and hamper its ability to carry an electric current without resistance. Similar effects can be caused by the geometry of the electrical and insulating components of the device, such as metal contacts and defect voids. The numerical modeling of nonrectangular geometries, however, has been scarcely explored and only accomplished in very few instances in the past. This is in part because of the complexity involved in setting the appropriate geometry-related conditions necessary for a correct description of nonregular boundaries. In this thesis we develop for the first time a numerical approach that can model any geometrical shape with arbitrarily complex electrical and insulating components (a minimal sketch of the underlying idea is given below). This approach provides the means to study the many exotic configurations used in technological applications of superconductors.
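As a flavor of how arbitrary geometries can be encoded on a regular grid, the following minimal C sketch marks each cell as vacuum, superconductor, or metal and then classifies the material interfaces where boundary conditions would be applied. All names here are hypothetical illustrations, not the actual data structures of this thesis.

    #include <stdio.h>

    /* Illustrative "material map" on a uniform grid (names hypothetical). */
    typedef enum { VACUUM, SUPERCONDUCTOR, METAL } cell_t;

    #define NX 64
    #define NY 64

    static cell_t map[NY][NX];

    int main(void) {
        /* Mark a superconducting disk with a metal contact on its left
         * edge, embedded in vacuum; nonrectangular shapes reduce to flags. */
        double cx = NX / 2.0, cy = NY / 2.0, r = NX / 3.0;
        for (int j = 0; j < NY; j++)
            for (int i = 0; i < NX; i++) {
                double dx = i - cx, dy = j - cy;
                map[j][i] = (dx * dx + dy * dy <= r * r) ? SUPERCONDUCTOR : VACUUM;
            }
        for (int j = NY / 2 - 4; j < NY / 2 + 4; j++)
            map[j][0] = METAL;

        /* Count horizontal superconductor-vacuum (SV) and superconductor-metal
         * (SN) interfaces; each interface type would receive its own boundary
         * condition (e.g. zero normal supercurrent at SV, current injection at SN). */
        int sv = 0, sn = 0;
        for (int j = 0; j < NY; j++)
            for (int i = 0; i + 1 < NX; i++) {
                cell_t a = map[j][i], b = map[j][i + 1];
                if ((a == SUPERCONDUCTOR && b == VACUUM) ||
                    (a == VACUUM && b == SUPERCONDUCTOR)) sv++;
                if ((a == SUPERCONDUCTOR && b == METAL) ||
                    (a == METAL && b == SUPERCONDUCTOR)) sn++;
            }
        printf("SV edges: %d, SN edges: %d\n", sv, sn);
        return 0;
    }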

II. Dynamic heating: superconducting materials used in technological applications experience heating in a number of ways. The crowding of current induced by an inhomogeneous distribution can lead to localized overheating and the formation of thermal hotspots. Such phenomena are common in devices with nonregular geometries and electrical components. The motion of quantized magnetic flux dissipates energy, which generates heat in superconducting materials. External sources can also induce heating: metal contacts heat the sample through the injection of electric current, whereas photon collisions induce thermal hotspots in the material during the photon detection process. Since heating can raise the superconductor's temperature above its critical value (beyond which it transitions to a normal state), it restricts the currents and temperatures at which superconductors can operate and limits their applications. Understanding the properties and causes of heating is thus increasingly important in many technological applications of superconductors. The dynamics of heating, however, have not yet been explored in the literature. This is in part due to the numerical effort involved in solving the heat equation and because of the difficulty in defining the heat coefficients that describe the thermal properties of the material. In this thesis we couple the heat-balance equation into our numerical model (a generic form is shown below) and for the first time study the heating properties of several real superconducting systems. This approach provides the opportunity for new and exciting studies in the relatively unexplored field of the dynamic heating of superconductors.
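For context, heat-balance equations of the following generic form are commonly coupled to the TDGL equations in the literature; this is a standard illustration, and the exact coefficients and terms used in this thesis may differ:

    C \frac{\partial T}{\partial t} = K \nabla^2 T + \rho_n j_n^2 - \frac{h}{d} \, (T - T_0),

where C is the effective heat capacity, K the thermal conductivity, \rho_n j_n^2 the Joule heating produced by the normal-current component, and the last term models heat removal to a substrate held at the bath temperature T_0, with surface heat-transfer coefficient h and film thickness d.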

III. Material inhomogeneities: superconducting materials used in real applications rarely come in pristine form. These materials are usually filled with inhomogeneities and impurities that can have substantial impacts on their superconducting properties. Some common types of inhomogeneities are precipitates, point defects, grain boundaries, dislocations, stacking faults, and strain fields. Since material defects can serve as pinning centers that suppress the motion of quantized magnetic vortices, they play a paramount role in the search for microstructure defect configurations that can attain high non-dissipative currents over a wide range of magnetic fields. This property is fundamental in power applications, where superconductors must be able to carry high currents without losses. While numerous studies have been conducted in recent years in the search for the ideal pinning landscape, most focus only on pinning induced by critical-temperature variations of the material. However, material inhomogeneities can emerge from a number of sources: heavy ions, chemically grown defects, nanostructured perforations, and permanent nanomagnets. For a complete description of the pinning landscape, in this thesis we model material defects for the first time through: pair-breaking scatterings that suppress the critical temperature, non-pair-breaking scatterings that cause variations in the mean free path, and physical vacuum vacancies that represent geometrical holes in the material (an illustrative parametrization is given below). This general model for different types of material defects provides unprecedented tools for the search of defect microstructure landscapes that can attain high non-dissipative currents in an otherwise disordered and complex superconducting system with extremely rich and nontrivial dynamics of vortex matter.
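In dimensionless TDGL notation, such defects are often introduced through spatially dependent coefficients; a common parametrization from the vortex-pinning literature (shown here as an illustration, not necessarily the exact notation of this thesis) reads

    u \frac{\partial \psi}{\partial t} = g(\mathbf{r}) \, (\nabla - i\mathbf{A})^2 \psi + \left( f(\mathbf{r}) - |\psi|^2 \right) \psi,

where f(\mathbf{r}) encodes the local critical-temperature suppression caused by pair-breaking scattering, g(\mathbf{r}) tracks variations of the mean free path caused by non-pair-breaking scattering, and vacuum vacancies are simply grid cells excluded from the superconducting domain.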

IV. Superconducting thin films: thin superconducting films are crucial in many technological applications of superconductors. They play a major role in the manufacturing of new devices that can operate at much higher temperatures than conventional superconductors and with much lower refrigeration costs. Since superconducting films are characterized by a small thickness, they exhibit properties of type-II superconductors, where the magnetic field completely penetrates the sample. This allows such samples to be modeled using two-dimensional systems. The manipulation of magnetic vortices has also become an interesting topic in digital information. However, most numerical simulations fail to account for all the physical properties of superconductors. Several features previously impossible to describe, and necessary for a correct description of real superconducting devices, are now accessible using the numerical methods developed in this thesis. In this thesis we study for the first time the superconductive properties of three experimental configurations: the dynamics of vortex matter in a Corbino disk, the motion of ultrafast vortices in an hourglass-shaped microbridge, and the photon detection process in a meander-patterned nanowire.


V. Three-dimensional superconductors: while much attention has been geared towards the study of superconducting thin films, materials have a finite measurable thickness in most applications. These exhibit important properties that cannot be captured through two-dimensional models. Vortex lines interact with each other and with material defects in superconducting slabs to achieve higher critical currents. In addition, the thickness of the sample controls the superconducting type of the material. The crossover between superconducting types gives rise to much interesting behavior of vortex matter that is yet to be explored. So far, however, numerical models have focused mostly on two-dimensional systems, and the few developed for three-dimensional ones can only model type-II superconductors with weak magnetic responses. This is in part because of the numerical complexity involved in simulating the complicated flux patterns and long-range magnetic responses present in type-I superconductors. In this thesis we develop a numerical solution for modeling three-dimensional superconductors and study their vortex configurations for the first time in the challenging type-I superconducting regime.

VI. Large-scale models: the numerical challenges that arise from solving the time-dependent GL equations prevent the modeling of superconducting materials at the scales necessary for a correct description of most real superconducting devices. Such devices are characterized by complex geometries and components that require many grid points for precise mapping. Simulations of large-scale models, however, are too time-consuming and can take between weeks and months to complete. With the emergence of parallel computing, large-scale problems can be broken down into smaller and simpler components that are less expensive to compute (a minimal sketch of this idea follows below). One of the major contributions in this thesis is the development of highly parallel numerical algorithms that can exploit the computational horsepower of GPUs and multicore CPU clusters and model large-scale problems previously impossible on conventional machines. This approach allows the study of superconducting devices at physical scales relevant to most technological applications. Within the scope of this thesis, two supercomputers (the Tier-1 and SDumont) are used to conduct numerical simulations of large-scale systems.
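To illustrate the domain-decomposition idea (a minimal sketch under simplified assumptions, not the solver developed in this thesis), each MPI rank can own a horizontal slab of the simulation grid and exchange one-cell-wide halo rows with its neighbors before every stencil update:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal 1D domain decomposition with halo exchange (illustrative only). */
    #define NXLOC 8   /* interior rows owned by each rank */
    #define NY    16  /* row width */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Local slab with one ghost row above and below. */
        double slab[NXLOC + 2][NY];
        memset(slab, 0, sizeof slab);
        for (int i = 1; i <= NXLOC; i++)
            for (int j = 0; j < NY; j++)
                slab[i][j] = rank; /* dummy field data */

        int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Exchange halos: send the first interior row up while receiving a
         * ghost row from below, and vice versa. Afterwards each rank can
         * apply a finite-difference stencil to all of its interior rows. */
        MPI_Sendrecv(slab[1], NY, MPI_DOUBLE, up, 0,
                     slab[NXLOC + 1], NY, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(slab[NXLOC], NY, MPI_DOUBLE, down, 1,
                     slab[0], NY, MPI_DOUBLE, up, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d: ghost above = %g, ghost below = %g\n",
               rank, slab[0][0], slab[NXLOC + 1][0]);
        MPI_Finalize();
        return 0;
    }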

VII. Computational tools: the numerical modeling of superconductors is very challenging for a number of reasons. Engineering a fast and efficient code for the numerical algorithms developed in this thesis is nontrivial, owing to the difficulty of using low-level programming languages (such as C) that are more efficient, and of optimizations that require a deep understanding of the hardware architecture. In addition, the generation of the geometries necessary for a correct description of most superconducting devices becomes very burdensome when considering large and complex samples with many components. Lastly, the visualization of the dynamic behavior of superconductors is expensive and impractical for large numerical systems. As such, most of the algorithms developed in this thesis would be inaccessible without a simpler mechanism for using them. In this thesis we develop several computational tools to assist with the modeling of superconducting devices: the first numerical library of the TDGL equations, a novel mechanism for the generation of complex geometries, a closed-form solver for numerical simulations, and a graphics user interface to visualize the dynamic behavior of superconductors. This collection of computational tools provides a very powerful mechanism for the modeling of superconductivity.

1.3 Organization of this thesis

The thesis is organized as follows:

Chapter 2 presents a brief overview of the key historic events related to superconductivity and computing. The various numerical approaches for solving the Ginzburg-Landau equations are described, as well as recent efforts to optimize them for modern architectures.

Chapter 3 introduces theoretical concepts related to the Ginzburg-Landau model. Derivations are shown for its equations and characteristic lengths. The validity of the theory is also discussed in depth. Gauge transformations are shown for obtaining the equations used in each model. The different types of superconductors studied throughout the thesis and their properties are also discussed.

Chapter 4 presents a numerical solution for modeling two-dimensional superconducting films. The numerical and computational procedures are discussed in detail, as well as their performance on a cluster of multicore CPU machines. Simulations are conducted for three experimental configurations previously unstudied in the literature: the dynamics of vortex matter in a Corbino disk, the motion of ultrafast vortices in an hourglass-shaped microbridge, and the photon detection process in a meander nanowire. Several physical features previously impossible to describe are modeled, including complex geometries, dynamic heating, and material inhomogeneities. Large-scale simulations of these models are conducted on two state-of-the-art supercomputers. The results of this Chapter are prepared for publication in Ref. [1].

Chapter 5 presents a numerical solution for modeling three-dimensional superconductors. The numerical and computational procedures are discussed in detail. Hardware-specific optimizations are also discussed and their performance on Graphics Processing Units is analyzed. Simulations are conducted for a large superconducting cube and its vortex configurations are studied for two superconducting regimes. The results of this Chapter are published in Ref. [2].

Chapter 6 presents a collection of computational tools to assist with the modeling of superconducting devices. A numerical library is designed to facilitate the use of the numerical algorithms developed in this thesis. A new mechanism is also introduced for the generation of the complex geometries used to model superconducting devices and their many physical components. A closed-form solver is described for running numerical simulations of the library through controllable parameters. Finally, a graphics user interface is presented to visualize the dynamic behavior of superconductors and their response to changing parameters.

Chapter 7 summarizes the work presented in this thesis and discusses the future outlook.


2 INTRODUCTION

2.1 History of Superconductivity

Superconductivity was discovered in 1911 by the Dutch physicist Heike Kamerlingh Onnes at the Leiden Physics Laboratory in the Netherlands [3]. Before the discovery of superconductivity, it was widely known that cooling a metal increases its conductivity, owing to reduced electron-phonon scattering. However, questions remained as to what happens at low temperatures close to absolute zero. Several speculations circulated in the physics world: Drude and Lorentz expected a steady decrease towards zero resistance, Lord Kelvin predicted an infinite resistance, and others believed in a minimal but finite residual resistivity. Onnes and his coworkers sought to answer these questions by cooling various pure metals to temperatures of only a few Kelvin using a refrigeration technique based on liquid helium [4]. Surprisingly, they observed that the resistance of mercury drops abruptly to zero at 4.2K (see Fig. 2.1) rather than decreasing gradually. Subsequent experiments found similar behavior for other materials (lead and tin) at different temperatures. These observations meant that below a critical temperature the current experiences virtually no resistance, with decay rates so low that observed lifetimes range up to thousands of years in certain materials [5]. The property of perfect conductance marked the first fundamental characteristic used to describe a new phenomenon called superconductivity.

Figure 2.1 – Resistance of mercury (left) and platinum (right) as a function of temperature. The resistance of mercury follows the path of a normal metal and then abruptly drops to nearly zero at the critical temperature Tc, which is 4.2K for mercury. In contrast, the resistance of platinum decreases continuously and shows a finite resistance R0 even at very low temperatures. Figure adapted from [6]; the original data was published by Kamerlingh Onnes.


For many years after its discovery superconductivity remained largely unexplained. This was in part caused by the incomplete picture that a superconductor was simply a perfect conductor. The real breakthrough came in 1933 when Meissner and Ochsenfeld studied the current flow in a superconducting cylinder [7] and discovered a second characteristic feature of superconductors: when cooled below the critical temperature, a superconductor completely shields any magnetic field and prevents it from penetrating the material (see Fig. 2.2). This perfect diamagnetic effect is independent of the history of cooling (zero-field cooling or field cooling), in contrast to perfect conductors, which must maintain a constant magnetic flux over time, i.e., an applied magnetic field remains in the conductor when it is cooled down. The source of diamagnetic behavior in superconductors is now well understood as the flow of internal currents which generate a magnetic field inside the superconductor equal in magnitude to the applied field but opposite in direction, so as to cancel the total field. Superconductors can remain in a state of perfect diamagnetism only up to a certain applied field, above which the magnetic flux penetrates the material and suppresses superconductivity.

Figure 2.2 – Effect of an external magnetic field on an ordinary conductor (left) and a superconductor (right). The external field penetrates the ordinary conductor and is excluded from the superconductor due to the Meissner-Ochsenfeld effect.

The discovery of the Meissner-Ochsenfeld effect was a crucial turning point in the understanding of superconductivity. It paved the way for the first theoretical description of superconductivity by the London brothers in 1935 [8]. Starting from the Drude-Lorentz equation of motion for electrons in a metal, they derived an equation (now known as the London equation) that describes the electromagnetic field in a bulk superconductor by relating the current and the magnetic field through the Maxwell equations. Their theory predicted the existence of a length scale (the London penetration depth) over which the magnetic field penetrating a superconductor decays exponentially. Pippard subsequently developed the nonlocal generalization of the London theory in 1953 and introduced a coherence length over which the supercurrent can vary [9]. These theories were successful in accounting for both the zero resistance and the absolute diamagnetism of superconducting materials.
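For reference, the London equation and the penetration depth it predicts can be written (in Gaussian units) as

    \nabla^2 \mathbf{B} = \frac{\mathbf{B}}{\lambda_L^2}, \qquad \lambda_L = \sqrt{\frac{m c^2}{4 \pi n_s e^2}},

so an applied field decays exponentially over the scale \lambda_L inside the superconductor; here n_s is the density of superconducting electrons.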

The phenomenological theories developed by London and Pippard, however, were unable to describe several fundamental properties of superconductors. In particular, they did not address the spatial distribution of the carriers of superconductivity, nor explain their depletion to zero under the nonlinear effects of fields at the critical temperature. Since the known characteristics of superconducting materials pointed towards a macroscopic manifestation of quantum effects occurring on a microscopic scale as the origin of superconductivity, the development of a new theory that would take these quantum effects into account and help address some of the issues that the London and Pippard theories could not was crucially important. Encouraged by the success of quantum mechanics in the explanation of metallic behavior, many prominent theorists of the time started to apply quantum mechanical tools to explain superconductivity, leading to two predominant theories [10]. The first, developed independently by Bloch, Landau, and Frenkel, was based on the notion of a current-bearing equilibrium ground state, where finite spontaneous currents were favored in a superconducting state below the critical temperature and current-free equilibrium states at higher temperatures. The second was an electron-lattice theory developed by Bohr and Kronig, who postulated that superconductivity emerges from the quantum motion of a lattice of electrons. Despite making crucial advances in linking superconductivity to its quantum mechanical principles, these two theories failed to fully agree with existing experiments.

In 1950 Vitaly Lazarevich Ginzburg and Lev Landau proposed a new phenomenological theory of superconductivity [11] that was later proven to be one of the most powerful predictive theories in physics. Their theory was an extension of the London theory of superconducting electrodynamics and sought to correct the negative surface energy problem in the London equations. Ginzburg and Landau introduced a wave function to describe the superconducting electrons, which were considered to be the effective carriers of superconductivity. Using the Landau theory of second-order phase transitions, they expanded an energy functional in terms of a complex order parameter representing the wave function in the proximity of the critical temperature, and derived a pair of equations: one describing the spatial variations of the order parameter and another describing the magnetic field. The theory predicted two characteristic lengths (the coherence length and penetration depth) that set the scales over which the order parameter and magnetic field can vary in space. A major significance of the theory was its ability to describe the destruction of superconductivity by temperature and magnetic field, which paved the way for numerous studies of the transition properties of superconductors.
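In modern textbook notation (Gaussian units), the Ginzburg-Landau free energy expansion and the two characteristic lengths it predicts read

    F = F_n + \int \left[ \alpha |\psi|^2 + \frac{\beta}{2} |\psi|^4 + \frac{1}{2 m^*} \left| \left( -i \hbar \nabla - \frac{e^*}{c} \mathbf{A} \right) \psi \right|^2 + \frac{B^2}{8 \pi} \right] dV,

    \xi = \frac{\hbar}{\sqrt{2 m^* |\alpha|}}, \qquad \lambda = \sqrt{\frac{m^* c^2 \beta}{4 \pi e^{*2} |\alpha|}},

where \psi is the complex order parameter, \mathbf{A} the vector potential, and \alpha, \beta the phenomenological expansion coefficients (\alpha changes sign at the critical temperature).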

Still in the 1950s, Alexei Abrikosov, a student of Landau, made one of the most important discoveries in superconductivity. By studying the Ginzburg-Landau equations under the condition that the penetration depth is larger than the coherence length, Abrikosov observed an intermediate state in which superconducting and normal domains can coexist at the thermodynamic critical field [12]. Abrikosov later realized that the normal state penetrates the superconductor in the form of quantized flux lines, which repel each other at all distances due to the negative surface energy and organize into a regular triangular structure called the Abrikosov lattice (see Fig. 2.3). He therefore proposed a new class of superconducting materials (now called type-II superconductors) in which the normal metal/superconductor boundary has negative energy under a magnetic flux, so that the flux penetrates the material in the form of vortices (small whirlpools of superconducting electrons with a normal core at the center). This is in contrast to type-I superconductors, with positive surface energy, where flux penetrates in the form of macroscopic normal domains; these were the only superconducting materials known at the time. The ability to distinguish between type-I and type-II superconductors made the Ginzburg-Landau theory the preferred framework for the study of superconductivity.
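The distinction between the two types is governed by the Ginzburg-Landau parameter

    \kappa = \frac{\lambda}{\xi}: \qquad \kappa < \frac{1}{\sqrt{2}} \;\; \text{(type-I)}, \qquad \kappa > \frac{1}{\sqrt{2}} \;\; \text{(type-II)},

and each Abrikosov vortex carries exactly one flux quantum \Phi_0 = hc/2e (about 2.07 \times 10^{-15} Wb in SI units).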

Figure 2.3 – Abrikosov lattice present (a) in a Pb film (first direct observation of individual flux lines [13]), (b) in a MgB2 film (obtained through scanning tunneling spectroscopy imaging of the lattice [14]), and (c) in a computer simulation of the GL equations using the numerical algorithms developed in this thesis.

Despite considerable progress in understanding the processes behind superconductive materials, their underlying microscopic mechanism was still unclear. In 1950 Herbert Fröhlich made the first steps towards the formulation of a microscopic theory when he postulated that superconductivity is produced through strong interactions between conducting electrons, mediated by lattice vibrations (phonons) that could bind them [15]. Subsequent experiments confirmed his prediction and found that the transition temperature depends on the mass of the atoms of the superconducting material, known as the isotope effect [16]. Leon Cooper took a step further in 1956 when he showed that the Fermi-sea ground state of the electron gas is unstable even in the presence of a very weak attractive electron-phonon interaction [17]. Aware of the existence of a band gap, the similarity to superfluidity, the isotope effect, and experimental results for the flux quantum suggesting two involved electrons, Bardeen, Cooper, and Schrieffer constructed in 1957 a unified microscopic theory of superconductivity with great predictive power [18]. The BCS theory described superconductivity as a coherent sea of Cooper pairs that move collectively and unperturbed through the crystal lattice. These Cooper pairs are pairs of electrons bound by attractive electron-phonon interactions, occupying states with equal and opposite momentum and spin near the Fermi surface. Being bosons, the Cooper pairs can condense into the same ground state and form a superconducting condensate that can be described by a macroscopic wave function. Since the Cooper pairs cannot absorb energies smaller than the Cooper-pair binding energy, the theory confirmed the previously observed gap in the energy spectrum [19–21] between the ground state and the lowest-lying excited states.
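For scale, the weak-coupling BCS theory predicts a universal ratio between the zero-temperature energy gap and the critical temperature,

    2 \Delta(0) \approx 3.53 \, k_B T_c,

so that breaking a Cooper pair requires an energy of at least 2\Delta, which is the origin of the gap observed in the excitation spectrum.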

In 1959 Lev Gor'kov, another student of Landau, revitalized interest in the Ginzburg-Landau theory within the physics community. Gor'kov managed to prove that very close to the critical temperature (at which a superconducting material becomes normal) the phenomenological theory devised by Ginzburg and Landau can be rigorously derived from the microscopic BCS theory of superconductivity [22]. This not only validated the Ginzburg-Landau theory, previously regarded as merely phenomenological, but also provided a relationship between the phenomenological coefficients of the theory and the microscopic parameters of a superconducting material, such as its Fermi velocity and density of states.

The breakthrough in the understanding of the quantum mechanical basis of superconductivity led to further progress in superconducting circuits and components. Brian Josephson, a graduate student at Cambridge University supervised by Sir Alfred Brian Pippard, predicted in 1962 that a superconducting quantum liquid should be able to leak through a barrier, such as a thin layer of normal conducting material, between two superconducting samples [23]. This process described the quantum tunneling of superconducting electrons. He also found that the quantum phase, which determines the transport properties, is an oscillating function of the voltage applied over this kind of junction. The Josephson effect has important applications in precision measurements, since it establishes a relation between voltage and frequency scales.
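The two Josephson relations make this voltage-frequency link explicit:

    I_s = I_c \sin \varphi, \qquad \frac{d \varphi}{d t} = \frac{2 e V}{\hbar},

so a constant voltage V across the junction produces an alternating supercurrent of frequency f = 2 e V / h, roughly 483.6 MHz per microvolt, which is the basis of the modern voltage standard.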

Between the 1950s and 1980s many different superconducting materials were discovered, albeit mostly with low critical temperatures. Because all of these materials had common features (e.g. the isotope effect, the existence of an energy gap, Cooper-pair creation stemming from the electron-phonon interaction, and pairing of s-wave symmetry) that could be described through the microscopic BCS theory, they were named conventional superconductors. At the time it was widely believed that superconductivity would remain confined to liquid-helium temperatures, accessible only with the rather rare and expensive liquefied helium gas. Moreover, the microscopic BCS model predicted a theoretical limit between 30K and 40K for the critical temperature, above which the pair-breaking of Cooper pairs occurs. This was quite a setback, since it locked the maximal working temperature for superconductors far below room temperature. Research stalled as most of the scientific community considered the topic resolved, without any room for future progress.

In 1986 Karl Müller and Johannes Bednorz discovered a ceramic compound of lanthanum and copper oxide doped with barium (LaBaCuO) with a critical temperature of 35K [24], much higher than any previously found superconducting material. Shortly afterward, several cuprate compounds were synthesized with a dramatic increase in critical temperature: YBCO appeared to be superconducting at temperatures below 92K [25] and BSCCO at temperatures below 107K [26]. The discovery of this new class of copper oxide-based ceramic materials, with temperatures that greatly exceed the pair-breaking temperature predicted by the microscopic theory (see Fig. 2.4 for an evolution of high-temperature superconductors), drew much attention in the scientific community. While Cooper pairing was still the mechanism for superconductivity in these materials, it was clear that conventional BCS theory could not account for the unusually high critical temperatures, and a search began for alternate explanations. Despite various proposed theories to explain the electron pairing in these materials, scientists are still struggling to understand what mechanism yields their very high critical temperatures. High-temperature superconductors, however, offered not only new horizons in transition temperature but also plenty of interesting phenomena like d-wave pairing symmetry [27,28], the pseudo-gap [29], charge stripes [30,31], and exotic pairing mechanisms [32–34], for which they were named unconventional superconductors.

Figure 2.4 – History of superconductors between 1900 and 2015. The observed superconducting transition temperature (Tc) is shown for a variety of classes of superconductors (red for conventional, green for high-Tc, blue for heavy-fermion, yellow for fullerenes, and orange for carbon-based) as a function of time. Here GPa stands for the unit of pressure measured in gigapascals. Data based on [35].

Because high-temperature superconductors allowed for the use of a coolant much less expensive than helium (nitrogen is liquid at 77K, below the critical temperature of these materials), scientists began using them in a variety of practical applications in science and technology. Nowadays superconductors are widely used in small-scale applications as electronic components or devices (detecting neural activity in the brain, as bandpass filters in mobile phone stations, as fast alternatives in digital circuitry, and as building blocks of quantum computing) and in large-scale applications as magnets (used in maglev trains, magnetic resonance imaging, and particle accelerators) and as energy devices (used in power transmission cables, transformers, fault limiters, and generators). Despite considerable progress, the usability of superconductors in a variety of applications is still limited by the many material and engineering issues (material imperfections, overheating, energy loss) that hamper their superconductive properties. Scientists are comprehensively studying these materials and their properties to come up with new techniques that will enhance their superconductive capabilities. This is where computational modeling (and in particular the Ginzburg-Landau model) comes into play, working hand-in-hand with experiments in the search for new solutions.

2.2 History of Computing

While the origins of computing date back to 2700-2300 BC with the invention of the abacus, the main concepts of modern computing can be traced to a much shorter period. Charles Babbage is often considered to be the "father of computing" for his invention of the analytical engine in 1837, a hand-cranked mechanical digital device that marked the first general-purpose computer [36]. The engine was designed to carry out complex arithmetic operations on a mill arranged into pegs and rotating drums, and to store a total of 1000 numbers of 40 decimal digits each (equivalent to 16.609 kilobytes). It could select from alternative actions based upon the outcome of its previous actions, and was controlled by a program of instructions contained on punched cards connected together with ribbons [37]. Despite having never been built, the design of this device (see Fig. 2.5) anticipated virtually every aspect of present-day computers: a central processing unit with arithmetic logic operations, an integrated memory, a basic control flow in the form of conditional branching, and a set of reprogrammable instructions. Only nearly a century later was the first functional general-purpose digital computer actually created with all of these features.
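The quoted storage figure follows from a direct conversion of decimal digits to binary information:

    40 \text{ digits} \times \log_2 10 \approx 132.9 \text{ bits} \approx 16.61 \text{ bytes}, \qquad 1000 \times 16.61 \text{ bytes} \approx 16.6 \text{ kilobytes}.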

Figure 2.5 – Schematic design for Babbage's analytical engine is shown on the left. The structure arranged around large circuital wheels is the mill (CPU) and the store (memory) extends off the sheet. On the right a page from Babbage's 1827 Table of Logarithms is shown to highlight the complexity required in a production table. Source: the Science Museum (London).

Between the invention of the analytical engine in the 1830s and the creation of the first digital computer in the late 1930s, analog computers were considered by many to be the future of computing. These were mechanical devices that used changeable aspects of physical phenomena (such as electrical, mechanical, and hydraulic quantities) to model specific problems. Notable examples include the tide-predicting machine invented by Sir William Thomson in 1872, based on a system of pulleys and wires; the differential analyzer devised by James Thomson in 1876 for solving differential equations using a wheel-and-disc mechanism; and the bombsight machine developed by Harry Wimperis in 1916 for calculating the trajectory of bombs from the wind speed. The key property of these analog devices was that they operated on continuous values (temperature, pressure, voltage) rather than the discrete Boolean representation (of 0s and 1s) used in their digital counterparts. Modern examples of popular analog devices are neural networks, whose elementary operations are based on weights that reside in a continuous space. Because analog computers were not very flexible, were prone to errors from external influences and changes in ambient temperature and pressure, and were non-deterministic, they quickly fell out of favor with the introduction of digital computers.

The principle of modern computing was first described in 1936 by computer scientist Alan Turing in his seminal paper "On Computable Numbers" [38]. Turing was interested in the question of what it means for a task to be computable and devised a mathematical paradigm to explain it. He believed that any arbitrarily complex task was computable if it could be decomposed into a sequence of much simpler instructions whose execution results in the completion of said task. In order to prove this paradigm he introduced the abstract notion of a digital computing machine, later known as the Turing machine. This machine was based on an infinite one-dimensional tape of discrete cells with unique symbols (0, 1, or 'blank') that could be manipulated according to a predefined table of rules (see Fig. 2.6). The sequence of rules (or rather instructions) that a machine operates on comprises a procedure, or algorithm, for the specific task. Given an infinite amount of space and time, the Turing machine could theoretically be adapted to solve any computable problem. It followed then that any machine or instruction set that was Turing complete (or rather had the same properties as a Turing machine) could also solve any computable problem. The revolutionary idea of an instruction-based machine set the standard for modern-day computing.

Figure 2.6 – Concept of a Turing machine. An infinite memory tape divided into discrete cells and attached with a read/write head that performs two basic operations: shifts the tape to the right and prints a value of 0 on the tape.
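To make the rule-table idea concrete, the following minimal C sketch (a hypothetical rule encoding, not from the thesis) implements the two operations in Fig. 2.6: print 0 and shift right, applied over a finite stand-in for the infinite tape.

    #include <stdio.h>

    /* Minimal Turing-machine sketch (illustrative only). The rule table maps
     * (state, symbol) -> (write, move, next state). */
    #define TAPE_LEN 16
    #define HALT    -1

    typedef struct { char write; int move; int next; } rule_t;

    int main(void) {
        char tape[TAPE_LEN + 1] = "1111111111111111"; /* finite stand-in tape */
        /* rules[state][symbol]: symbol index 0 = '0', 1 = '1' */
        rule_t rules[1][2] = {{
            { '0', +1, 0 },  /* reading '0': keep 0, move right, stay in state 0 */
            { '0', +1, 0 },  /* reading '1': print 0, move right, stay in state 0 */
        }};
        int state = 0, head = 0;
        while (state != HALT && head >= 0 && head < TAPE_LEN) {
            rule_t r = rules[state][tape[head] == '1'];
            tape[head] = r.write;  /* write symbol under the head */
            head += r.move;        /* shift the head */
            state = r.next;        /* transition to the next state */
        }
        printf("final tape: %s\n", tape); /* prints all zeros */
        return 0;
    }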

The Turing machine, however, relied on a discrete representation that could not be used by analog machines, which were continuous by nature. While Turing laid down the abstract concept for digital computers, another mathematician by the name of Claude Shannon formulated its physical realization. In 1937, Claude Shannon showed in his master's thesis "A symbolic analysis of relay and switching circuits" [39] that the logical algebra and binary arithmetic of the 19th-century mathematician George Boole could be used to simplify the arrangement of electromechanical relays, at the time mostly used in telephone routing switches. He then expanded this concept, proving that, conversely, such circuits could be used to solve any problem expressible in Boolean algebra. Shannon showed that basic circuits connected in parallel and in series reduce to simple Boolean operations (see Fig. 2.7). Exploiting this property of electrical switches to do logic became the basic concept that underlies all electronic digital computers. Shannon's work thus became the foundation of practical digital circuit design (such as logic gates), which is the principal component of digital computers.

Figure 2.7 – Schematic of AND (multiplication) and OR (addition) logic gates. Top shows the circuits connected in series and parallel. Bottom shows the symbol used to denote each gate and its respective operation.
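As a toy illustration of this correspondence (a sketch, not code from this thesis), two switches wired in series conduct only when both are closed, which is exactly Boolean multiplication, while two switches in parallel conduct when either is closed, which is Boolean addition:

def series(a, b):
    # Series circuit: current flows only if both switches are closed (AND).
    return a and b

def parallel(a, b):
    # Parallel circuit: current flows if either switch is closed (OR).
    return a or b

# Enumerating all switch positions reproduces the AND and OR truth tables.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "series:", int(series(a, b)), "parallel:", int(parallel(a, b)))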

Digital computers as envisioned by Turing stored numbers in a discrete representation of digits, most commonly binary bits (0s and 1s). This was in stark contrast to their analog counterparts, which relied on continuous values. Most early digital computers used electrical switches to store numbers: when a switch was off it stored a value of 0, and when it was on, a value of 1. Hundreds of thousands of switches were then used to store numbers made of many binary bits. The earliest digital switch was the electromechanical relay, a physical switch consisting of a solenoid with mechanical contact points that closed when electricity energized a magnet. Mechanical relays were at the time too slow and unreliable a medium (prone to failure from dust in the contacts and bending of moving metal parts) for most digital computing. It was the development of high-speed digital techniques using vacuum tubes (devices that control electric current between electrodes in a vacuum container) that made computing practical; vacuum tubes were used in all early digital computers.


With a complete definition of what comprised a computable machine and a physical mechanism to realize it, the first digital computers were made. These machines were composed of digital logic gates constructed from mechanical relays and vacuum tubes, which in combination could carry out various computations. Some notable examples of early digital computers include: a special-purpose machine (the ABC) developed by John Atanasoff and Clifford Berry at Iowa State University for solving linear algebraic equations, a fully functional and programmable machine (the Z3) constructed by Konrad Zuse and used by the German Aircraft Research Institute to perform statistical analysis of wing flutter, and a large-scale digital machine (the Colossus) designed by Thomas Flowers in 1943 as part of a top-secret project for cracking German codes during World War II. Although not all of them were programmable and Turing-complete, these machines represent the first digital computers and set the standard for the future of digital computing.

Most early digital computers however did not have a mechanism to store a program and relied on a punch card system. These punch cards (usually a strip of paper or tape) were used to encode instructions that could then be fed into a machine. The encoding procedure consisted of punching holes into the card (usually done by dedicated keypunch operators) in a very specific pattern. The task of encoding programs onto cards was however very tedious and prone to errors: holes had to be correctly punctured, cards would easily become unreadable when ripped or damaged, and programs of more than a few dozen kilobytes needed too many cards to be stored. In 1945 John Von Neumann proposed an architecture in the "First Draft of a Report on the EDVAC" [40] that was composed of a processing unit (with arithmetic and logic units), a memory unit (registers, cache, RAM), and a unit for inputs and outputs, all connected by internal buses. The Von Neumann architecture was based on the stored-program computer concept, where instruction data and program data both reside in the same memory. This design allowed for programs to be stored directly in a computer, thus replacing the punch card system. The concept of a program distinct from the machine that runs it revolutionized the way computers were used, and this design became the main architecture for modern digital computers.
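The stored-program concept can be sketched as a toy fetch-decode-execute loop in which instructions and data occupy one and the same memory; the four-instruction set below (LOAD, ADD, STORE, HALT) is purely illustrative and not drawn from the EDVAC report.

def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch the next instruction
        pc += 2
        if op == "LOAD":                      # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program (cells 0-7) and data (cells 8-10) reside in the same memory array,
# so a program could in principle modify itself like any other data.
memory = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 2, 3, 0]
print(run(memory)[10])  # prints 5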

Despite being a considerable advance over relay switches, the use of vacuum tubes was still largely unreliable and unsustainable for large-scale computing. Machines that used vacuum tubes, like the Colossus and ENIAC, consumed enormous amounts of power (the ENIAC used about 2000 times as much electricity as a modern laptop) and took up huge amounts of space, to the point that the sheer size of vacuum tubes had become a real problem. Moreover, these devices were susceptible to several malfunctions caused by overheating and faulty circuits (the first computer bug was caused by a moth [41] short-circuiting a machine) and not very durable (like light bulbs, they eventually needed to be replaced). Developing computers that were an order of magnitude more powerful would have required hundreds of thousands or even millions of vacuum tubes, which would have been far too costly, unwieldy, and unreliable.

In 1948 a microelectronic revolution took place with the invention of the transistor [42] by the physicists John Bardeen, Walter Brattain, and William Shockley from Bell Telephone Laboratories (Bell Labs). The transistor was a semiconductor device designed to regulate current and voltage flow and act as a switch or gate for electronic signals.


Because of its reduced cost, weight, and power consumption, and its higher durability and reliability, the transistor, a solid-state device, almost completely replaced the vacuum tube as the electronic switch. While the first transistors were built using germanium, most are now based on silicon. The invention of the transistor was one of the most important developments in computing, as it allowed for the construction of much smaller and more power-efficient machines with tens of thousands of binary logic circuits in a relatively compact space.

The main problem with transistors was that, like other electronic components, they still needed to be hand-wired and soldered together. This proved to be a very difficult and error-prone task on machines with thousands of wires and many interconnected components, where the likelihood of faulty wiring was high. The solution came in 1958 with the invention of the integrated circuit [43] by Jack Kilby at Texas Instruments and, independently, by Robert Noyce at Fairchild Semiconductor. The integrated circuit was a collection of transistors and other components placed on the surface of a small flat piece of semiconductor (usually silicon). Since transistors were connected together during the manufacturing of the circuit, there was no need to solder individual transistors, which greatly simplified the process of constructing complex machines. Integrated circuits brought a number of advantages over discrete transistors: they were smaller in size (which meant computers became smaller and cheaper), faster as electrical signals had to travel much shorter distances, and could for the first time share resources across different programs (multi-tasking). Over time, integrated circuits opened up the possibility of running processes powered by millions of circuits, all on a microchip the size of a postage stamp.

The invention of transistors and integrated circuits revolutionized the computing industry. Rather than building machines with thousands of tubes and parts that take up entire rooms, computers could now be scaled to much smaller devices: a single block of semiconductor material (also known as a die) was usually only a few millimeters in length and contained thousands of transistors. As the technology advanced, scientists sought to reduce the size of transistors so as to fit more components into the same area (higher density), which permitted the design of ever more complex, fast, and power-efficient circuits. Starting with a feature size of 10 µm (10^4 nm) used in the first microprocessor in 1971 down to the 10 nm used in chips today (see Fig. 2.8), considerable progress was made in processing and patterning technologies, as well as in nanotechnology. The downward scaling of transistor dimensions however was not the only driving force in improving the performance of integrated circuits. Die dimensions have also grown over the years, from a surface area of 12 mm² in the early 1970s to surface areas over 100 mm² in most modern technologies (almost an order of magnitude larger), augmenting the number of transistors a chip can carry. With these technological advances over the past four decades, the number of transistors on dies has nearly doubled every two years (see Fig. 2.8), fueling an exponential growth of the semiconductor industry. While the first integrated circuit had only five transistors, modern chips are now manufactured with billions of transistors, an increase of over eight orders of magnitude. Since this exponential growth was predicted by Intel co-founder Gordon Moore in 1965, it became commonly known as Moore's Law [44].


Figure 2.8 – Right: Evolution of Moore's law in terms of number of transistors, clock speed, power consumption, and performance per clock between 1970 and 2010 (data retrieved from [45, 46]). Left: Evolution of transistor feature scales over time and representative microchips. Bottom: Pictures of transistors for 22 nm, 14 nm, and 10 nm scaling technologies (images obtained from [47]).

As transistors scaled down to a few dozen nanometers, semiconductor companies faced new challenges. They found that packing a lot of silicon circuitry into a small area caused the faster-moving electrons to generate too much heat. Since refrigeration was expensive to maintain and would reduce profits, companies had to come up with another solution. In order to avoid overheating, they stopped trying to increase the microprocessor's clock rate, which effectively determined how fast the chip could execute instructions. The clock rate has since remained fixed at around 3 GHz, as seen in Fig. 2.8. Since more transistors could still be inserted onto the die, they redesigned the internal circuitry so that each chip contained not only one processor (or core) but several of them. Although these processors were slower and simpler, together they outperformed the previous, more complex ones that overheated, while consuming much less power. With a growing number of transistors every few years, more cores could be added onto a chip. As the semiconductor industry shifted from single-core to multicore processors, parallel computing became the prevalent paradigm, and several parallel architectures have been developed since (see Fig. 2.9).

The exponential growth envisioned by Moore however is slowly reaching a saturation point [49–51]. Modern chips are already being manufactured with billions of transistors and a feature scale in the low nanometer range. As transistors scale down to only a few nanometers in length, fundamental challenges in physics arise. Features are only a few dozen atoms across at these scales, and quantum mechanical effects begin to take place due to the wavelike behavior of particles, making transistors hopelessly unreliable [52, 53]. Over the last decade several alternative technologies have been investigated to overcome these challenges (see Fig. 2.10).


Figure 2.9 – Evolution of supercomputers between 1940 and 2018 (data collected from [48]). Red denotes the era of mono-computers (a single shared memory accessed by multiple processors) and blue denotes the era of multiple computers interconnected via high-speed networks.

Figure 2.10 – Future technologies under research to continue with Moore's Law (obtained from [47]).

Novel computing paradigms such as quantum computing [54, 55], neuromorphic computing [56, 57], and photonic computing [58] have been proposed as promising alternatives, but many of these are believed to offer advantages only for specific applications rather than for the everyday tasks of digital computing. Another technology is vertical expansion through stacking many thin layers of silicon circuitry [49]. While this works for memory chips that consume power only when a memory element gets accessed (which is not that often), microprocessing chips are more challenging, as stacking layers of hot circuits only intensifies the generation of heat.


Lastly, other more suitable materials can replace silicon in electronics, ranging from graphene [59] and similar compounds [60], to organic electronic materials [61] based on organic molecules or polymers produced by chemical synthesis, to memristors [62], which couple electrons with electrically charged atoms called ions. Which of these technologies, if any, will be capable of preventing the stagnation of Moore's Law remains an open question.

2.3 Numerical Approaches for the Ginzburg-Landau equations

The time-dependent Ginzburg-Landau equations are powerful tools for studying the macroscopic behavior of superconductors. They are used in particular to capture the transient processes of the condensate under electromagnetic fields and currents. The complex and nonlinear nature of these equations however prevents a purely analytical solution, except in a few highly idealized contexts. Numerical methods are then used to transform the differential equations into discrete algebraic equations that can be solved on a computer. Since the numerical solutions to these methods are computationally expensive, various optimization techniques are used to make their computation tractable in practice.
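As a minimal illustration of this workflow (a sketch under simplifying assumptions, not the solver developed in this thesis), the Python code below integrates a dimensionless one-dimensional Ginzburg-Landau equation at zero field, ∂ψ/∂t = ψ − |ψ|²ψ + ∂²ψ/∂x², using second-order central differences in space and explicit Euler steps in time; the grid size, time step, and random initial state are illustrative choices.

import numpy as np

nx, dx, dt = 256, 0.5, 0.01  # grid points, spacing, and time step (assumed)
rng = np.random.default_rng(0)
# Small random complex initial condition for the order parameter psi.
psi = 0.1 * (rng.standard_normal(nx) + 1j * rng.standard_normal(nx))

for step in range(20000):
    # Second-order central difference for the Laplacian (periodic boundaries).
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    # Explicit (forward Euler) update: cheap per step, but dt must remain
    # small enough for stability (dt < dx^2/2 for the diffusive part).
    psi += dt * (psi - np.abs(psi) ** 2 * psi + lap)

print(np.mean(np.abs(psi) ** 2))  # relaxes toward 1, the uniform superconducting state

The same decomposition, a spatial discretization scheme paired with a time-stepping rule, underlies the methods surveyed next.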

Over the last few decades many numerical methods have been designed for solving the time-dependent Ginzburg-Landau equations. Because the space and time variables of the equations can be decoupled, these methods branch into two classes: spatial discretization and time-stepping methods. Spatial discretization methods are used to partition the continuous domain space into a discrete representation that can be processed on a computer. The most common are finite difference, finite element, and finite volume methods, which differ in how they partition the domain. Because of its simplicity, finite difference has been the most widely used method [63–69]. It is very effective on domains without any sharp edges and curves that can be represented with regular and structured grids, and has the advantage of being computationally inexpensive. When dealing with domains that have complex shapes and features, finite element methods are the preferred choice [70–79]. These can improve the precision in certain regions of the domain without increasing the complexity of the solution (see Fig. 2.11). However, their solution can be quite computationally expensive and is very sensitive to the chosen finite element mesh. Finite volume methods [80–82] are simpler than the former and especially powerful on nonuniform grids and in fluid dynamic simulations where the mesh moves to track interfaces or shocks. While finite element and finite volume methods are integral schemes, finite difference, as its name suggests, is based on a difference method.

Time-stepping methods on the other hand are used to advance transient solutions of the equations over time. These methods can be either explicit, implicit, or semi-implicit, depending on how the solution depends on the current and previous state. Explicit methods [63] are easy to implement and have a low cost per time step, but they can take a long time to converge in cases where small time steps are needed to keep the error bounded. Implicit methods [68, 73, 74, 83–86] on the other hand are usually unconditionally stable and useful for solving stiff problems, which explicit methods cannot handle within reasonable times. But
