Contents lists available at ScienceDirect
Pattern Recognition Letters
journal homepage: www.elsevier.com/locate/patrec

Results from MICHE II – Mobile Iris CHallenge Evaluation II

Maria De Marsico a,∗, Michele Nappi b, Hugo Proença c

a Sapienza University of Rome, Italy
b University of Salerno, Italy
c IT: Instituto de Telecomunicações, University of Beira Interior, Portugal

Article info

Article history: Available online xxx
MSC: 41A05; 41A10; 65D05; 65D17
Keywords: Mobile iris recognition; MICHE-II competition; Biometric authentication

Abstract

Mobile biometrics represent the new frontier of authentication. The most appealing feature of mobile devices is their wide availability and the presence of more and more reliable sensors for capturing biometric traits, e.g., cameras and accelerometers. Moreover, they more and more often store personal and sensitive data that need to be protected. Doing this on the same device, using biometrics to enforce security, seems a natural solution. This makes this research topic attractive and generally promising. However, the growing interest in related applications is counterbalanced by still present limitations, especially for some traits. Acquisition and computation resources are nowadays widely available, but they are not always sufficient to allow a reliable recognition result. Most of all, the way capture is expected to be carried out, i.e., by the users themselves in uncontrolled conditions and without expert assistance, can heavily affect the quality of samples and, as a consequence, the accuracy of recognition. Among the biometric traits raising the interest of researchers, iris plays an important role. The Mobile Iris CHallenge Evaluation II (MICHE II) competition provided a testbed to assess the progress of mobile iris recognition, as well as its limitations still to overcome. This paper presents the results of the competition and the analysis of the achieved performance, taking into account both proposals submitted for the competition section launched at the 2016 edition of the International Conference on Pattern Recognition (ICPR) and proposals submitted for this special issue.

http://dx.doi.org/10.1016/j.patrec.2016.12.013
© 2016 Elsevier B.V. All rights reserved.

1. Introduction

Mobile equipment is ubiquitous nowadays. Smartphones are inexorably substituting "old" cellular phones, which in turn had replaced traditional landlines to a significant extent. As a matter of fact, it is not very frequent to find a public telephone, which in the past was a characteristic element of the urban scenario. The possibility to communicate almost wherever and whenever represents a significant sociological phenomenon. Notwithstanding this, the "whatever" option offered by the new communication devices and protocols can be seen as the actual revolution. The use of smart mobile devices for mere communication, in its traditional meaning of "speaking with somebody", has become quite marginal with respect to more advanced communication modalities. These include sending in real time almost any kind of multimedia information, therefore dramatically decreasing the time needed for the dissemination of ideas and event accounts. One of the aspects related to this new technological scenario is the possibility to store personal, and often sensitive, data directly on the mobile devices, and remotely

∗ Corresponding author.
E-mail address: demarsico@di.uniroma1.it (M. De Marsico).

connecting to either social platforms or to personal services. The latter often entail the exchange of sensitive information, and require a twofold approach to address an increased security need: on one side, it is necessary to reliably identify the owner before the use of the device, while on the other side it is necessary to reliably identify the user of a remote service at the moment the device is used to connect to it. It is well accepted that biometrics can both enforce authentication and make it simpler. In simple cases, the presentation of a robust biometric trait for recognition can substitute complex passwords or cards. The next step on which research is focusing is to move biometrics to mobile. Mobile biometric recognition is the new advanced frontier for the secure use of data and services, either locally or remotely. It provides a further application for user mobile equipment, which is ubiquitous nowadays. This includes personal devices, e.g., smartphones and tablets, but also the incoming wearable devices, e.g., smartwatches, given that they are equipped with suitable sensors. Capture of biometric traits in any place is one of the strengths of this new approach. Captured information (a sample) can be compared with that stored either on the device itself, or even within RFID tags, smart cards or machine readable identification documents (IDs), to verify the owner's identity by a 1:1 match operation. As an alternative, the possibly


pre-processed sample can be sent to a remote server, for identification within a set of relevant subjects by a 1:N matching operation. Mobile devices used for biometric recognition must therefore embed suitable hardware equipment and software applications, allowing to capture and possibly process data from one or more biometric traits. However, the any-time, any-place option is also one of the weak points of this new authentication paradigm.

Applications must be designed for intuitive operation, especially if it is not planned to assist users during sample capture. The environmental conditions might be adverse and cause a number of artifacts, e.g., shadows, reflections, or acoustic noise, hindering an accurate recognition. Moreover, the user might not be able to evaluate whether the quality of the obtained capture is sufficient to allow reliable processing. The captured data must be suitably converted by software into digital templates for storage and matching against other records. Feature extraction, storing and processing might require non-negligible resources. Therefore, notwithstanding the continuous advances in technology and resources, transferring all the phases of biometric processing onto a mobile device calls for faster and lighter procedures (e.g., see De Marsico et al. [12]), and for more efficient storage, which might not be universally available yet. Techniques targeted at mobile devices must still be suitably adapted. It is therefore interesting to assess the level of performance that we can realistically expect from a mobile biometric application.

Among the possible biometric traits that can be processed in a mobile setting, iris plays a relevant role. Iris acquisition is minimally intrusive, since it can be carried out at a reasonable distance from the eye to avoid user discomfort. In addition, iris codes are among the lightest templates to store. Related research achieved a quick performance increase. The pioneering works by Daugman [11] and Wildes [25], as happened for other biometric traits, mostly pertain to controlled settings. More recent challenges address iris recognition in less controlled and/or mobile settings [13,22]. Among the most advanced approaches, it is worth mentioning the use of deep learning [18]. At present, iris recognition systems operating in controlled conditions still require a distance of about 1 m or less between the subject and the capture device. Moreover, the user has to look towards the device for about 3 s. The first iris biometric competitions relied on images acquired in these conditions. Among the most well-known, we can mention the Iris Challenge Evaluation (ICE) [20]. Proença and Alexandre [21] have rather tackled the problem of noisy iris recognition. A further difference is that they aimed at assessing performance over images captured in visible light (VL) instead of near-infrared (NIR). This point is still debated in the research community. VL images usually contain more features and, in some cases, more details than NIR images. However, they are also more seriously affected by problems due to uneven illumination and may present many noisy artifacts, especially reflections of light sources and/or objects present in the environment. Moreover, their processing suffers from dark pigmentation [17]. This raises an apparent contradiction: more detail does not necessarily mean more information. The work by Hollingsworth et al. [17] demonstrated that humans can better recognize images of the periocular region, including skin patches and other elements like eyebrows, if acquired in visible light. As a matter of fact, these images show melanin-related differences that do not appear in NIR images. However, the situation is often reversed when the recognition operation regards the iris alone. NIR images contain cleaner and more easily distinguishable features, and this makes iris recognition in NIR generally more feasible than in visible light. In addition, dark pigmentation represents a hard condition in VL, since it may hide details that are otherwise well detectable in lighter-colored irises. Balancing the pros and cons of the different approaches, commercial iris recognition uses NIR acquisition. Since mobile biometrics are becoming quite popular, the market also started to offer NIR attachments made for mobile devices. However, in everyday devices, both NIR sensors and NIR attachments are still quite rare, and it is difficult to anticipate whether they will reach a sufficient diffusion, if no largely shared user requirement arises. The aim of the Mobile Iris CHallenge Evaluation II (MICHE-II) competition was to assess which level of performance can be achieved without special equipment, and to collect relevant contributions to the field of mobile iris recognition from both academia and industry. A section of the contest was launched in conjunction with ICPR 2016, while further proposals were submitted for this Special Issue. This paper presents the comparison of the eight best performing algorithms. In the following, Section 2 will present the benchmark used for the competition. Section 3 describes the competition setup, by presenting the common segmentation algorithm provided to competitors and the performance measures used for the evaluation. Section 4 briefly presents the best eight algorithms participating in the MICHE-II challenge. Section 5 presents and discusses the competition results. Finally, Section 6 draws some conclusions.

2. MICHE-II database

In biometrics research, the advancements in recognition techniques, and in particular the progressive loosening of constraints on acquisition conditions, usually call for increasingly challenging benchmarks. This happened for iris too. The Chinese Academy of Sciences collected and made available to the scientific community the first group of publicly available datasets of significant size dealing with iris images, namely CASIA-Iris. Since 2002, the original dataset has been updated from CASIA-IrisV1 to CASIA-IrisV4. In all dataset versions, images are collected under NIR illumination or contain synthesized parts (CASIA-IrisV1). For these reasons, they cannot be used for a general assessment of methods entailing mobile acquisition, where NIR sensors are not sufficiently widespread in mobile devices. The mentioned ICE competitions used images acquired with the same class of sensors. The UBIRIS datasets were instead collected to assess proposals for iris recognition in VL. Their images were acquired and made available by SOCIA Lab at the University of Beira Interior (Portugal), which organized the two NICE challenges that will be discussed in the next section. The images were captured in visible light and uncontrolled conditions. Such conditions are similar to those that the MICHE-II competition wanted to address. However, acquisition exploited cameras with a better resolution than the average ones embedded in mobile devices.

The MICHE-I dataset, which is publicly available to the scientific community and represents the core of the still unpublished MICHE-II dataset, is intended as the starting core of a wider dataset to be collected using different mobile devices, in uncontrolled conditions, with different illumination, and without the assistance of an operator. This should better allow an unbiased assessment of the cross-device and cross-setting interoperability of recognition procedures. In fact, the dataset allows a twofold assessment. It allows measuring the ability to match samples of the same subject from different devices, and in different environments. In addition, more in general, it allows measuring the ability to handle samples acquired by devices with different characteristics and in different settings, without a significant degradation of recognition performance. MICHE-I, the core of MICHE-II, is a dataset of iris images acquired in visible light by different mobile devices. Its key features are: (1) a sufficient population of users; (2) the use of different mobile devices embedding sensors with different characteristics; (3) an acquisition process including different sources of noise; and (4) multiple acquisition sessions at different times. The images in the dataset are further fully annotated, as detailed in the following. Three kinds of devices were used for iris image capture:


- Galaxy Samsung IV (GS4) smartphone: Google Android operating system, two cameras: CMOS posterior camera with 13 Megapixels (72 dpi) and 2322 × 4128 resolution, and CMOS anterior camera with 2 Megapixels (72 dpi) and 1080 × 1920 resolution;
- iPhone 5 (IP5) smartphone: Apple iOS operating system, two cameras: iSight posterior camera with 8 Megapixels (72 dpi) and 1536 × 2048 resolution, and anterior FaceTime HD camera with 1.2 Megapixels (72 dpi) and 960 × 1280 resolution;
- Galaxy Tablet II (GT2): Google Android operating system, no posterior camera, 0.3 Megapixel anterior camera with 640 × 480 resolution.

In order to have images with the same characteristics as those expected in a mobile operation, the subjects participating in data collection were instructed to behave as they would do in a normal situation; e.g., subjects wearing eyeglasses could either choose to remove or keep them, and take a different choice in different sessions. They captured self-images of one of their irises only, by holding the mobile device at the distance they considered suitable, with a minimum of four shots for each camera and acquisition mode (indoor, outdoor). The indoor acquisition setup included various sources of artificial light, sometimes combined with natural light sources. Outdoor acquisition was carried out using natural light only. As a consequence of the different sensor characteristics, the different groups of images, for each device and for each camera, have different levels of resolution, which is one of the factors that possibly negatively affect cross-device recognition. MICHE-I dataset images are affected by different sources of noise: (a) reflections caused by artificial/natural light sources, people or objects in the scene; (b) out of focus; (c) blur, due either to an involuntary movement of the hand holding the device, or to an involuntary movement of the head/eye; (d) occlusions, due to eyelids, eyelashes, hair, eyeglasses, or shadows; (e) device-specific artifacts, due to the low resolution and/or to the specific noise pattern of the device; (f) off-axis gaze; (g) variable illumination; and (h) different color dominants. The lack of precise framing and of a fixed distance in the capture results in variable sizes of the region useful for recognition. As a matter of fact, both images containing well centered eyes and images containing half faces are present in the dataset. This can be considered typical of mobile captures performed by the users, who usually hold the device neither too close nor at arm's length. As a consequence, eye localization, which must be performed in a pre-processing step, may be more difficult. In some cases, it is possible to exploit the extended periocular region for recognition, if this is included in the captured image, but this cannot be taken for granted. The dataset has been collected during different acquisition sessions, of course separated in time. The time elapsed between the first and second acquisition of a same subject varies from two to nine months. At present, MICHE-I contains images from 75 different subjects, with 1297 images from GS4, 1262 images from IP5, and 632 images from GT2.

As already mentioned, MICHE-I images are annotated to maintain a number of useful pieces of information. The XML annotations include the following tags (a hypothetical example is sketched after the list):

- filename: the name of the image to which the XML file refers; it is composed according to a convention allowing a quick retrieval of the desired image(s);
- imgtype: the trait captured in the image, since face images will soon be included in the dataset;
- iris: the iris that was acquired (right, left, or both when the image contains both irises);
- distance from the device: distance of the user from the acquisition camera, measured to provide further assessment information;
- session number: the number of the image acquisition session;
- image number: image ordinal number;
- user: identification number of the subject, together with age, gender and ethnicity;
- device: all information about the capture device: type, name, camera position (front or rear), resolution and dpi;
- condition: information about capture conditions: location, illumination;
- author: the XML file also contains the name of the laboratory/institution that made the acquisition.
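For illustration only, the sketch below builds and reads a hypothetical annotation following the tags listed above. The tag names mirror the list, but the exact file layout, the filename convention shown, and all values are assumptions made for this example, not taken from the actual MICHE-I distribution.

```python
import xml.etree.ElementTree as ET

# Hypothetical MICHE-I annotation: tag names follow the list above,
# but layout and values are illustrative assumptions.
SAMPLE = """
<annotation>
  <filename>001_GS4_IN_F_RI_1_1.jpg</filename>
  <imgtype>iris</imgtype>
  <iris>right</iris>
  <distancefromdevice>12 cm</distancefromdevice>
  <sessionnumber>1</sessionnumber>
  <imagenumber>1</imagenumber>
  <user id="001" age="24" gender="F" ethnicity="caucasian"/>
  <device type="smartphone" name="GS4" cameraposition="front"
          resolution="1080x1920" dpi="72"/>
  <condition location="indoor" illumination="artificial"/>
  <author>BipLab - University of Salerno</author>
</annotation>
"""

root = ET.fromstring(SAMPLE)
print(root.findtext("filename"),
      root.findtext("iris"),
      root.find("condition").get("location"))
```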

MICHE-I was the dataset provided to the participants in the MICHE-II challenge for training. Further sequestered images with similar characteristics were captured and used to evaluate the final ranking. These will be added to MICHE-I to obtain the complete MICHE-II dataset, which will soon be available to the research community.1

Figs. 1 and 2 in Section 3.2 below show some examples of images in MICHE-I.

3. MICHE-II competition setup

This section presents the setup that was chosen for the MICHE-II competition, starting from the elements that distinguish it with respect to preceding challenges, and then presenting the details of the common framework provided to participants.

3.1. The goal of MICHE-II competition

The previous section has already mentioned the first wide challenges in iris recognition, namely ICE. Since those competitions used NIR images of the iris, acquired in controlled conditions, it is clear how they differ from MICHE-II. A more similar context is entailed by the NICE contests. The Noisy Iris Challenge Evaluation (NICE I),2 organized by SOCIA Lab at the University of Beira Interior (Portugal), exploited images captured in unconstrained imaging environments to evaluate to which extent noise can affect iris segmentation. To this aim, the proposed iris dataset UBIRIS.v2 [23] contains data captured in visible light (VL), at a distance, namely between 4 and 8 m, and on the move. The results achieved by the participant methods confirm the major impact that uncontrolled conditions have on recognition performance. Recognition of visible light (VL) iris images captured at a distance and on the move with less controlled protocols was the target of the further NICE II contest3 [21]. While the NICE I contest was focused on iris segmentation, NICE II rather addressed iris recognition. The segmentation masks obtained by the best algorithm from NICE I were provided to participants, who used them to select the relevant region for feature extraction and matching. The pair of competitions MICHE-I and MICHE-II has been conceived along a similar line.

The MICHE-I challenge [13] addressed issues related to iris acquisition and segmentation by mobile devices. In this new context, the basic assumption is that the subject to be recognized autonomously operates the capturing device. MICHE-I provided to participants the dataset with the same name, to be used as a common benchmark, suitable to assess the performance of biometric applications related to this specific setup. Two opposite considerations hold. The usually short distance (the length of a human arm at maximum) can increase capturing accuracy/quality. A further element to consider is the natural tendency of the user to assume a frontal pose in selfies. The other side of the coin is that the quality of the captured image can suffer both from a possibly lower resolution, due to the capture device, and from out of focus/motion blur, incorrect framing, and illumination distortions, caused both by the kind of device and by the lack of experience of the user in controlling the capture operation. Due to these distortion factors, the robustness of the detection/segmentation and encoding procedures must be suitably improved in order to achieve good results in this context. Obviously the accuracy of the encoding, i.e., the ability to correctly extract relevant and discriminative features, and of the following recognition, are heavily affected by the quality of the segmentation. As mentioned above, the composition of the dataset used for the MICHE-II challenge is basically the same as MICHE-I, with the addition of new unpublished images to be used in the competitor ranking process.

1 http://biplab.unisa.it
2 http://nice1.di.ubi.pt
3 http://nice2.di.ubi.pt/


Fig. 1. Examples of segmentation of "good" images with the algorithm proposed by Haindl and Krupička for MICHE-I.

Fig. 2. Examples of segmentation of "problematic" images with the algorithm proposed by Haindl and Krupička for MICHE-I.


3.2. The common segmentation algorithm

According to the policy established by the NICE-I and NICE-II competitions, the problem of iris recognition was tackled by two separate challenges: MICHE-I addressing segmentation, and the


following MICHE-II addressing recognition. In order to ensure a fair and unbiased evaluation focused on recognition only, all competing groups registered for the challenge had to use a common segmentation algorithm, namely the best segmentation algorithm from MICHE-I. This algorithm, proposed by Haindl and Krupička [16], is focused on the detection of the non-iris components inside the parametrized iris ring. The segmentation procedure starts from a reflection detection step. After this, form-fitting techniques produce a parametrization of the pupil. Next, as often happens in iris-related methods, the data is converted into the polar domain, where texture analysis identifies the regions of the normalized data according to a Bayesian paradigm separating iris from non-iris. All the MICHE-II competitor methods start from the segmentation produced. Fig. 1 shows some examples of "ideal" images captured both indoor and outdoor, with the corresponding segmentation mask obtained by the algorithm by Haindl and Krupička.
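The conversion into the polar domain mentioned above is commonly implemented as a Daugman-style "rubber sheet" mapping of the annular iris region onto a rectangular grid. The sketch below is a generic, minimal version of that step, assuming the center and the pupil/iris radii come from the preceding form-fitting stage; it is not the actual code by Haindl and Krupička.

```python
import numpy as np

def to_polar(img, cx, cy, r_pupil, r_iris, n_radii=64, n_angles=256):
    """Sample the annulus between r_pupil and r_iris, centered at (cx, cy),
    onto a (n_radii x n_angles) rectangular grid (nearest-neighbor)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_radii)
    polar = np.zeros((n_radii, n_angles), dtype=img.dtype)
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        polar[i] = img[ys, xs]
    return polar
```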

It is interesting to notice that, though reflections are more evident outdoor, a more diffused and uniform illumination creates favorable conditions for localization and segmentation.

Fig. 2 shows some examples of problematic images. They summarize a number of problems that can be found in the MICHE-I dataset, which can be generically classified as "occlusions". For instance, they can depend on the presence of glasses, which can create sharp shadows in the region of interest, especially indoor, and in general lead to a scarcely visible iris region, so that the number of pixels that can be used for recognition is reduced.

It is possible to underline that in some cases the eye is framed quite well in the captured image, while in other cases a half face is captured.

3.3. Performance evaluation

Different feature extraction procedures can produce different kinds of templates, which may require specific approaches to similarity/distance evaluation. For this reason, the competitors were left free to choose any suitable distance measure for the produced iris templates, with the only constraint that it be at least a semi-metric. The dissimilarity score chosen by each competitor was meant to represent the probability that two irises are from two different subjects. The higher the dissimilarity, the higher the probability that the two irises are not from the same person. According to this, given $I$ the set of images from the MICHE-II database, and $I_a, I_b \in I$, the dissimilarity function $D$ had to be defined as:

$D : I_a \times I_b \rightarrow [0, 1] \subset \mathbb{R}$    (1)

and had to satisfy the following properties:

1. $D(I_a, I_a) = 0$
2. $D(I_a, I_b) = 0 \rightarrow I_a = I_b$
3. $D(I_a, I_b) = D(I_b, I_a)$
The final result returned by each algorithm had to be a full dissimilarity matrix between the probe and gallery sets provided. The competition procedure avoided a possible bias implied by embedding special processing into the algorithms to improve performance, since new images were added and new distance matrices were computed in order to create the final rank. The distance matrices produced by each method were used to compute the usual Figures of Merit (FoM) to rank them, namely the Recognition Rate (RR) for identification, and Receiver Operating Characteristic (ROC) curves, in particular the Area Under Curve (AUC), for verification.
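A sketch of how such figures of merit can be computed from a probe-by-gallery dissimilarity matrix is given below; it assumes arrays of identity labels and uses scikit-learn for the AUC, and is our illustration rather than the official evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rank1_recognition_rate(D, probe_ids, gallery_ids):
    """Rank-1 RR: fraction of probes whose closest gallery sample
    (minimum dissimilarity) carries the correct identity."""
    D = np.asarray(D, dtype=float)
    probe_ids, gallery_ids = np.asarray(probe_ids), np.asarray(gallery_ids)
    nearest = np.argmin(D, axis=1)
    return float(np.mean(gallery_ids[nearest] == probe_ids))

def verification_auc(D, probe_ids, gallery_ids):
    """AUC of the ROC curve, treating 1 - dissimilarity as a match score
    and pairs with equal identity as genuine."""
    D = np.asarray(D, dtype=float)
    probe_ids, gallery_ids = np.asarray(probe_ids), np.asarray(gallery_ids)
    genuine = (probe_ids[:, None] == gallery_ids[None, :]).ravel()
    return roc_auc_score(genuine, 1.0 - D.ravel())
```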

4. Competitor algorithms at a glance

This section briefly presents the best eight algorithms submitted to the MICHE-II challenge. The algorithms are listed according to the final ranking achieved. The given information is sufficient to provide a self-contained account of the results. More details can be found in the papers in this Special Issue. Each algorithm is labeled in order to identify it in the tables in the section presenting the experiments.

The best recognition results were achieved by the algorithm denoted by the label tiger_miche and described in [6] (extended in [7]). This algorithm carries out iris matching by a combination of a popular iris coding approach and periocular biometric processing based on Multi-Block Transitional Local Binary Patterns (LBP). The algorithm computes the iris code Hamming distance and the periocular matching scores separately, and then combines the results by a score-level fusion to improve the system accuracy. Since the matchers produce values in different ranges and with very different score distributions, z-score normalization is used.
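A minimal sketch of this kind of score-level fusion is shown below, assuming both matchers are expressed as dissimilarities computed over the same comparison list; the equal 0.5/0.5 weighting is our assumption, as the weights are not stated here.

```python
import numpy as np

def zscore(scores):
    """Z-score normalization: zero mean, unit standard deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fused_scores(iris_hamming, periocular_dist, w_iris=0.5):
    """Weighted sum of the two matchers after z-score normalization,
    so that their very different ranges become comparable."""
    return w_iris * zscore(iris_hamming) + (1.0 - w_iris) * zscore(periocular_dist)
```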

The following algorithm is labeled karanahujax and is described in [8] (extended in [9]). The paper first presents a baseline model based on Root Scale-Invariant Feature Transform (RootSIFT). The segmented iris image is overlaid with the binary mask to rid the iris image of occlusions. Then dense color RootSIFT descriptors [10] are computed, giving keypoints with identical size and orientation. On top of this, two stacked convolution-based deep learning models (Convolutional Neural Networks, CNNs) are designed to identify a given subject from a periocular image. The CNNs are trained on a set of periocular images as part of the learning phase. The two convolution-based models for verifying a pair of periocular images containing the iris are compared with each other as well as with the baseline model. In the first approach, deep learning is applied in an unsupervised manner (Model 1). The method uses a stacked convolutional architecture, after learning external models a priori on external facial and periocular data, on top of the baseline model applied to the provided data. Afterwards, different score fusion models are tested to get the final result. In the second approach, the authors still use a stacked convolutional architecture, but the discriminative feature representations are learned in a supervised manner (Model 2).
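The RootSIFT step from the baseline is well defined in [10]: each SIFT descriptor is L1-normalized and square-rooted, so that the Euclidean distance on the result approximates the Hellinger kernel on the original histograms. A minimal sketch:

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """Convert SIFT descriptors (n x 128 array) to RootSIFT:
    L1-normalize each row, then take the element-wise square root."""
    d = np.asarray(descriptors, dtype=float)
    d = d / (np.abs(d).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(d)
```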

The algorithm with label Raja is due to Raja et al. [24]. They propose multi-patch deep features using deep sparse filters, in order to obtain robust features for iris recognition. The patch-level representation is obtained by dividing the iris image into a number of patches, which are mapped onto a collaborative subspace to perform classification via maximized likelihood. The approach is designed to work also with a single-sample gallery. Of course, the advantage entailed by the patch-based approach is that the noise from external illumination and eyelids can be considered as localized within individual blocks, so that the other blocks are relatively noise-free and can be reliably used for extracting robust features. In order to avoid missing relevant global information, the features are also extracted from the holistic image, which is jointly represented with the patches. The information corresponding to the red, green and blue color channels is processed separately. The images obtained for each channel after segmentation and normalization are divided into a number of blocks. Along with the blocks, the whole image is also processed to obtain deep sparse histograms using the set of deep sparse filters. The set of histograms obtained from the different channels and blocks is concatenated to form the final feature vector. In order to demonstrate the robust feature representation provided by the proposed scheme, the authors employ simple distance measures, namely the χ² and Cosine distances.
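The two distances used for matching the concatenated histograms are standard; a minimal sketch follows, with the small epsilon added by us to avoid division by zero:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histogram feature vectors."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def cosine_distance(h1, h2):
    """1 - cosine similarity between two feature vectors."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return 1.0 - (h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
```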

The next algorithm is labeled irisom and details can be found in [1] (extended in [19]). Simple image processing techniques, like contrast enhancement and histogram adjustment, are used together with unsupervised learning by Self-Organizing Maps (SOM). The algorithm first matches the original image in polar coordinates with the segmentation mask, in order to discard all non-significant pixels in the surroundings of the iris. A SOM network is then configured and trained with pixels of the pre-processed image, thus building the feature matrix that clusters the iris pixels. The SOM network is fed with RGB triples together with local statistical descriptors. These are kurtosis and skewness, which are computed at pixel level in a neighborhood window of 3 × 3 size. The output of the network is a feature map with the activation status of the neurons for each pixel. In practice, such a map represents the cluster decomposition of the image, which projects the problem of iris recognition onto a lower dimensional space. On the obtained feature maps, the algorithm computes the Histogram of Gradients (HOG), and the result is used as a feature vector representing the iris. To verify the subject identity, the Pearson coefficient in the [0, 1] real interval is used to measure the correlation between two images. The Pearson correlation is used as the probability that the two irises are from the same subject. The results reported here refer to 5 × 5 and 10 × 10 SOMs.
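Two of the building blocks are easy to sketch: the per-pixel skewness and kurtosis over a 3 × 3 window that enrich the SOM input, and the Pearson-based match score. The rescaling of the correlation from [-1, 1] to [0, 1] is our assumption about how the [0, 1] interval is obtained; SciPy is used for the window statistics.

```python
import numpy as np
from scipy.ndimage import generic_filter
from scipy.stats import kurtosis, skew

def local_stats(gray):
    """Per-pixel skewness and kurtosis over a 3 x 3 neighborhood."""
    g = gray.astype(float)
    sk = generic_filter(g, lambda w: skew(w), size=3)
    ku = generic_filter(g, lambda w: kurtosis(w), size=3)
    return sk, ku

def pearson_match_score(f1, f2):
    """Pearson correlation of two HOG feature vectors, rescaled to [0, 1]
    and read as the probability of a genuine match."""
    r = np.corrcoef(np.asarray(f1, float), np.asarray(f2, float))[0, 1]
    return (r + 1.0) / 2.0
```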

The algorithm proposed by Galdi and Dugelay has label FICO_matcher [14] (extended in [15]). It uses a combination of classifiers exploiting the iris color and texture information. Moreover, cluster information is coded, where "clusters" are the small color spots that often characterize the human iris. For each cluster, centroid coordinates, orientation, and eccentricity are concatenated in a feature vector. A multi-layer approach is proposed, where layers are derived for each color channel of the color space at hand. The best performances are obtained using the a∗ and b∗ channels of the CIE 1976 L∗a∗b∗ color space. Color values are first normalized between 0 and 255; the resulting grey values are then divided into 8 intervals of size 32 (32 × 8 = 256), i.e., 8 layers (images) are obtained from each color channel. Two versions of the algorithm were proposed for the competition, differing in the fusion formula for the DIST(ances) computed over the different descriptors (color, texture and cluster). For Version 1 (V1), DIST_V1_final = DIST_color × 0.2 + DIST_texture × 0.4 + DIST_cluster × 0.4; for Version 2 (V2), DIST_V2_final = DIST_color × 0.2 + DIST_cluster × 0.8 (the texture descriptor is not used). An interesting characteristic is its limited computational time, particularly suitable for fast identity checking on mobile devices. In addition, the code is highly parallel, so that this approach is also appropriate for identity verification on large databases.
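Since the fusion formulas are stated explicitly, they translate directly into code; the sketch below only assumes that the three descriptor distances have already been computed.

```python
def fico_distance(dist_color, dist_texture, dist_cluster, version=1):
    """Final distance of FICO_matcher as given in the text."""
    if version == 1:
        return 0.2 * dist_color + 0.4 * dist_texture + 0.4 * dist_cluster
    # Version 2: the texture descriptor is not used.
    return 0.2 * dist_color + 0.8 * dist_cluster
```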

Aginako-Bengoa et al. [4] (extended in [5]) propose the algorithm labeled otsedom. It exploits both Machine Learning paradigms and Computer Vision techniques. Well-known descriptors are computed, such as LBP, LPQ, and WLD. They are used individually in order to construct a classifier, and then subsets of them are combined to improve on the accuracy obtained. The final algorithm combines the best five descriptors to obtain a robust dissimilarity measure of two given iris images. Machine Learning classifiers are used to perform the individual classifications, and hence to obtain the a-posteriori probability distribution for each of the two iris images. The histogram distance between the two distributions is used to compute the dissimilarity. To perform the final classifier combination, five different classifiers are used, each one giving a different a-posteriori distribution for each image. The mode of each a-posteriori probability for each class value is used to combine the five classifiers, and the distance of the two mode histograms (one for each iris) is used as the dissimilarity measure.

Finally, a different group of participants submitted the algorithm with label ccpsiarb [2] (extended in [3]). Also in this case, the proposed approach is a combination of techniques from Machine Learning and Computer Vision. First, an image classification process is carried out to classify the images as belonging to one of a given set of classes. This step involves both Machine Learning paradigms, to perform the classification itself, and image transformations to improve the accuracy of the obtained models. Regarding the latter, different modifications of the original pictures provide different views of the images. Examples of the transformations considered are Equalization, Gaussian, Median, etc. The main goal of this phase is to introduce variability in the aspect the picture offers, so as to obtain different values for the same pixel positions. The classifiers derive from some well-known ML supervised classification algorithms, with completely different approaches to learning and a long tradition in different classification tasks: IB1, Naive Bayes, Random Forest and C4.5. The experiments took into account the 19 image collections obtained by applying single transformations, and the four different classifiers, giving a total of 76 experiments. After testing these combinations of image transformations and Machine Learning algorithms, the Edge transformation followed by IB1 classification (identified as combination ccpsiarb_17) was identified as the combination providing the best results. Very close results were achieved by ccpsiarb_2 and ccpsiarb_42, obtained by Equalize + IB1 and by Gaussian filter + IB1, respectively. As a novelty, the dissimilarity between two images is computed as an a-posteriori histogram difference of the class distributions returned by the Machine Learning algorithm.

5. Competition results and discussion

In order to ensure a fair comparison, all algorithms were run from scratch at BipLab, University of Salerno, over an extended set of images (see Section 3.1), after segmenting them with the segmentation algorithm provided for the competition (see Section 3.2). The final ranking in Table 1 reports the best performing version among the ones submitted by each author (label). The rank was obtained by averaging the Recognition Rate (RR) and the Area Under Curve (AUC) achieved, and considering images captured by the two smartphones. We remind that authors could propose their own distance measure, provided it was at least a semi-metric (see Section 3.3).

Table 1 shows that the better the ranking achieved, the more stable the method with respect to the test setting. The "hardest" setting is of course ALLvsALL, entailing that gallery and probe images may come from different devices. All methods provide consistently lower performance in this condition. On average, the images over which the best results are achieved in homogeneous settings (gallery and probes from the same device) come from IP5 (mean value = 0.89), and the achieved results further present a lower standard deviation (0.086). This might be unexpected, due to the lower resolution of the camera, and seems to suggest that a higher resolution may also increase the extent to which the noise typical of iris images affects recognition. A complementary observation regards the way the different methods behave with respect to the different devices. Excluding tiger_miche, all the others present better results with either one camera or the other, with significant differences, except for otsedom. In particular, four of them work better with IP5 (lower resolution images).

In order to evaluate the stability of the methods submitted in more than one version, the extended ranking shown in Table 2 includes the different versions in the list of results.

We can notice that some methods are more stable across their variants. For instance, karanahujax_Model1 and karanahujax_Model2 achieve the same final score, but Model 2 was preferred in the final ranking due to its better behavior in ALLvsALL. A similarly constant behavior is observed for ccpsiarb, while FICO_matcher_V2 achieves dramatically worse results than FICO_matcher_V1.

Finally, Table 3 shows in detail the results in terms of RR and AUC achieved by the competing methods, dividing them according to the inter-device or intra-device setting, and also reports the computing time per matching operation in seconds.

Due to the possible change in ranking among methods in different settings, no specific ordering is provided for the values in Table 3.

Table 1
The final ranking of the best methods submitted to the MICHE-II competition.

Rank | Algorithm          | ALLvsALL | GS4vsGS4 | IP5vsIP5 | Final Rank
1    | tiger_miche        | 0.99     | 1.00     | 1.00     | 1.00
2    | karanahujax_Model2 | 0.89     | 0.89     | 0.96     | 0.91
3    | Raja               | 0.82     | 0.95     | 0.83     | 0.86
4    | irisom_10_10       | 0.79     | 0.82     | 0.88     | 0.83
5    | FICO_matcher_V1    | 0.77     | 0.78     | 0.92     | 0.82
6    | otsedom            | 0.78     | 0.80     | 0.78     | 0.79
7    | ccpsiarb_17        | 0.75     | 0.72     | 0.77     | 0.75

Table 2
Results from different versions of the competing methods.

Rank | Algorithm          | ALLvsALL | GS4vsGS4 | IP5vsIP5 | Final Rank
1    | tiger_miche        | 0.99     | 1.00     | 1.00     | 1.00
2    | karanahujax_Model2 | 0.89     | 0.89     | 0.96     | 0.91
3    | karanahujax_Model1 | 0.82     | 0.90     | 1.00     | 0.91
4    | Raja               | 0.82     | 0.95     | 0.83     | 0.86
5    | irisom_10_10       | 0.79     | 0.82     | 0.88     | 0.83
6    | FICO_matcher_V1    | 0.77     | 0.78     | 0.92     | 0.82
7    | irisom_5_5         | 0.77     | 0.75     | 0.90     | 0.81
8    | otsedom            | 0.78     | 0.80     | 0.78     | 0.79
9    | ccpsiarb_17        | 0.75     | 0.72     | 0.77     | 0.75
10   | ccpsiarb_2         | 0.74     | 0.72     | 0.75     | 0.74
11   | ccpsiarb_42        | 0.73     | 0.72     | 0.72     | 0.73
12   | FICO_matcher_V2    | 0.61     | 0.65     | 0.75     | 0.67

Table 3
Detailed results in terms of RR, AUC (with their "Global" average) and computing time per matching operation, in seconds, for each method.

                   | ALL vs ALL               | GS4 vs GS4               | IP5 vs IP5
Algorithm          | RR   AUC  Global Time(s) | RR   AUC  Global Time(s) | RR   AUC  Global Time(s)
ccpsiarb_17        | 0.68 0.83 0.75   57.64   | 0.63 0.81 0.72   61.43   | 0.70 0.85 0.77   54.30
ccpsiarb_2         | 0.65 0.82 0.74   259.27  | 0.63 0.81 0.72   265.38  | 0.63 0.86 0.75   252.84
ccpsiarb_42        | 0.65 0.81 0.73   286.20  | 0.63 0.81 0.72   289.27  | 0.63 0.81 0.72   283.39
FICO_matcher_V1    | 0.73 0.80 0.77   1.00    | 0.67 0.89 0.78   1.00    | 0.87 0.98 0.92   1.01
FICO_matcher_V2    | 0.48 0.73 0.61   0.57    | 0.50 0.79 0.65   0.57    | 0.57 0.93 0.75   0.57
irisom_10_10       | 0.80 0.78 0.79   3.48    | 0.77 0.88 0.82   3.45    | 0.83 0.92 0.88   3.51
irisom_5_5         | 0.75 0.79 0.77   3.03    | 0.63 0.88 0.75   3.00    | 0.87 0.93 0.90   3.05
karanahujax_Model1 | 0.88 0.76 0.82   5.68    | 0.83 0.97 0.90   5.72    | 1.00 1.00 1.00   5.63
karanahujax_Model2 | 0.92 0.86 0.89   4.65    | 0.83 0.95 0.89   4.62    | 0.93 0.98 0.96   4.69
otsedom            | 0.63 0.93 0.78   41.74   | 0.67 0.94 0.80   42.37   | 0.63 0.92 0.78   41.10
tiger_miche        | 1.00 0.99 0.99   1.71    | 1.00 1.00 1.00   1.72    | 1.00 1.00 1.00   1.71
Raja               | 0.82 0.81 0.82   12.91   | 0.93 0.96 0.95   12.78   | 0.77 0.89 0.83   12.80

No dramatic differences appear between the intra- and inter-device settings regarding the matching time; therefore we will consider the latter for the following discussion. It is interesting to observe how the best method, tiger_miche, also achieves one of the best results in terms of time required by the single matching operation. Only FICO_matcher_V1 does better, and even more so FICO_matcher_V2. However, the latter provides much lower recognition accuracy. At the other extreme we find the method otsedom and the group of ccpsiarb variants, with a maximum of about 286 seconds, i.e., about 5 minutes. This seems to testify that, overall, the presented ML techniques are not suited for a real-time operational setting, even if this parameter was not evaluated for the competition.

6. Conclusions

Mobile devices are going to play a relevant role in biometric recognition too. In this paper we discussed in detail the results of the MICHE-II contest, the follow-up of MICHE-I. These were the first international contests aiming at demonstrating the feasibility of iris/ocular recognition using data acquired from different types of mobile devices. After briefly summarizing the competition setting and the participating algorithms, the paper compared their performance in the different experiments carried out on the MICHE-II dataset. The level of performance achieved by the methods participating in MICHE-I was not comparable to that achieved by other investigations performed so far on iris recognition. However, the best methods participating in MICHE-II, focusing on feature extraction and recognition only, achieve extremely promising results. The most interesting aspect is that the images were acquired in uncontrolled conditions and in visible light, which are widely recognized as extremely adverse conditions. On the other hand, most mobile devices are equipped with high-resolution RGB cameras, and not with Near Infrared (NIR) sensors, which would allow better results. The MICHE-II Evaluation Challenge aimed at testing the feasibility of using state-of-the-art methods for iris recognition on mobile devices. The dataset provided for the competition contains indoor/outdoor images from frontal and rear cameras, acquired by the participants on their own with multiple devices and without any supervision. This created the opportunity to try the submitted methods on very challenging images, in order to answer the question: "is it feasible to recognize human irises from mobile devices with a sufficient level of accuracy?". The results achieved are encouraging and help identify interesting research lines to follow. As a matter of fact, in the best cases, they are comparable to the state of the art in iris recognition in more controlled conditions. The aim of the work was to assess the present state of the art in this specific field and highlight existing limitations. However, the same results suggest that it is worth further exploring possible improvements of the available techniques.

Acknowledgments

The third author acknowledges the support given by FCT project UID/EEA/50008/2013.

References

[1] A. Abate, S. Barra, L. Gallo, F. Narducci, SKIPSOM: skewness & kurtosis of iris pixels in self organizing maps for iris recognition on mobile devices, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–5.
[2] N. Aginako-Bengoa, J. Martínez-Otzerta, I. Rodriguez, E. Lazkano, B. Sierra, Machine learning approach to dissimilarity computation: Iris matching, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–5.
[3] N. Aginako-Bengoa, J. Martínez-Otzerta, I. Rodriguez, E. Lazkano, B. Sierra, Iris matching by means of machine learning paradigms: a new approach to dissimilarity computation, Pattern Recognit. Lett. Same Volume, TBA.
[4] N. Aginako-Bengoa, J. Martínez-Otzerta, B. Sierra, M. Castrillón-Santana, J. Lorenzo-Navarro, Local descriptors fusion for mobile iris verification, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–5.
[5] N. Aginako-Bengoa, J. Martínez-Otzerta, B. Sierra, M. Castrillón-Santana, J. Lorenzo-Navarro, Periocular and iris local descriptors for identity verification in mobile applications, Pattern Recognit. Lett. Same Volume, TBA.
[6] N. Ahmed, S. Cvetkovic, E. Siddiqi, A. Nikiforov, I. Nikiforov, Using fusion of iris code and periocular biometric for matching visible spectrum iris images captured by smart phone cameras, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–5.
[7] N. Ahmed, S. Cvetkovic, E. Siddiqi, A. Nikiforov, I. Nikiforov, Combining iris and periocular biometric for matching visible spectrum eye images, Pattern Recognit. Lett. Same Volume, TBA.
[8] K. Ahuja, R. Islam, F. Barbhuiya, K. Dey, A preliminary study of CNNs for iris and periocular verification in the visible spectrum, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–6.
[9] K. Ahuja, R. Islam, F. Barbhuiya, K. Dey, CNNs for ocular smartphone-based biometrics, Pattern Recognit. Lett. Same Volume, TBA.
[10] R. Arandjelovic, A. Zisserman, Three things everyone should know to improve object retrieval, in: Conference on Computer Vision and Pattern Recognition, CVPR 2012, IEEE, 2012, pp. 2911–2918.
[11] J.G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Trans. Pattern Anal. Mach. Intell. 15 (11) (1993) 1148–1161.
[12] M. De Marsico, M. Nappi, D. Riccio, Noisy iris recognition integrated scheme, Pattern Recognit. Lett. 33 (8) (2012) 1006–1011.
[13] M. De Marsico, M. Nappi, D. Riccio, H. Wechsler, Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols, Pattern Recognit. Lett. 57 (2015) 17–23.
[14] C. Galdi, J.-L. Dugelay, Fusing iris colour and texture information for fast iris recognition on mobile devices, in: International Conference on Pattern Recognition, ICPR 2016, 2016, pp. 1–5.
[15] C. Galdi, J.-L. Dugelay, FIRE: Fast iris recognition on mobile phones by combining colour and texture features, Pattern Recognit. Lett. Same Volume, TBA.
[16] M. Haindl, M. Krupička, Unsupervised detection of non-iris occlusions, Pattern Recognit. Lett. 57 (2015) 60–65.
[17] K.P. Hollingsworth, S.S. Darnell, P.E. Miller, D.L. Woodard, K.W. Bowyer, P.J. Flynn, Human and machine performance on periocular biometrics under near-infrared light and visible light, IEEE Trans. Inf. Forensics Secur. 7 (2) (2012) 588–601.
[18] N. Liu, M. Zhang, H. Li, Z. Sun, T. Tan, DeepIris: learning pairwise filter bank for heterogeneous iris verification, Pattern Recognit. Lett. (2015).
[19] F. Narducci, A. Abate, S. Barra, L. Gallo, Kurtosis and skewness at pixel level as input for SOM networks to iris recognition on mobile devices, Pattern Recognit. Lett. Same Volume, TBA.
[20] P.J. Phillips, W.T. Scruggs, A.J. O'Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, M. Sharpe, FRVT 2006 and ICE 2006 large-scale experimental results, IEEE Trans. Pattern Anal. Mach. Intell. 32 (5) (2010) 831–846.
[21] H. Proença, L.A. Alexandre, The NICE.I: Noisy Iris Challenge Evaluation – part I, in: Biometrics: Theory, Applications, and Systems, BTAS 2007, First IEEE International Conference on, IEEE, 2007, pp. 1–4.
[22] H. Proença, L.A. Alexandre, Toward covert iris biometric recognition: experimental results from the NICE contests, IEEE Trans. Inf. Forensics Secur. 7 (2) (2012) 798–808.
[23] H. Proença, S. Filipe, R. Santos, J. Oliveira, L.A. Alexandre, The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance, IEEE Trans. Pattern Anal. Mach. Intell. 32 (8) (2010) 1529–1535.
[24] K. Raja, R. Raghavendra, S. Venkatesh, C. Busch, Multi-patch deep sparse features for iris recognition in visible spectrum using collaborative subspace for robust verification, Pattern Recognit. Lett. Same Volume, TBA.
[25] R.P. Wildes, Iris recognition: an emerging biometric technology, Proc. IEEE 85 (9) (1997) 1348–1363.
