The four-span superstructure has been idealized as a mesh of longitudinal and transverse members, and the loading has been applied as per the construction sequence. The analysis and design of the beams has been carried out for the outer beam. Transverse analysis of the deck has been done using ASTRA software and Excel. For dispersion, a 65 mm thick wearing coat has been assumed, but the load due to the wearing coat has been taken as zero.
[Table 2: under-deck slab load. Figure 3: shuttering load. Cable geometry in plan: length at mid (t, mm), length of curved profile (u, mm), length of reverse curve (v, mm).]
The CL of the jacks is taken to be mm from the CL of the main girder, but from a detailing point of view, hoop reinforcement has been provided. Minimum reinforcement in the vertical direction has been provided in the web.
The environment of underwater structures, such as concrete piers, is more complex than that of the superstructure. It is characterized by rapid flow, turbid water, low visibility, strong corrosivity, and a large sediment concentration on the surface of the structure. The quality of image acquisition is often disturbed by a series of uncertain environmental factors such as wind vibration, noise, and water impact.
To solve these problems and conduct monitoring in such complex environments, a remotely operated vehicle (ROV) with an attached micro-camera is sometimes used to dive into the water and carry out the image acquisition.
We also apply related techniques such as image generation and raw-data enhancement or restoration to ensure data availability for training our model. To increase the diversity of the dataset, some extra crack images taken by UAV from existing buildings in Chengdu, China, were also added. The width of the cracks varies widely, from a single pixel upward; their shapes are hard to recognize at the image level because of environmental noise or surface stains, rust, and corrosion.
The imaged surfaces include not only asphalt or cement concrete pavement but also the concrete walls of buildings and other structures. All of these images are saved in JPG format. Cracks in these images were photographed at different distances depending on their sizes; each image consists of RGB pixels at a fixed resolution. As for the ground truth of the image dataset for model training, the crack images were manually annotated with Photoshop, a common and useful software package.
The crack images were labeled as follows: background pixels were marked as 0 and crack pixels as 1, allowing the image information to be stored in binary format. Meanwhile, the model applied in this article is trained on the labeled crack images using backpropagation of errors, an algorithm for supervised learning of artificial neural networks via stochastic gradient descent.
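As a minimal sketch of this labeling convention, the snippet below converts a manually annotated image into a 0/1 mask; the file name and the intensity threshold are illustrative assumptions, since the paper does not describe the export step.

```python
# Convert a Photoshop-annotated image (crack pixels drawn non-black) into
# the binary mask convention described above: 0 = background, 1 = crack.
import numpy as np
from PIL import Image

def annotation_to_binary_mask(path):
    """Load an annotated image and return a uint8 mask of 0s and 1s."""
    gray = np.array(Image.open(path).convert("L"))  # grayscale intensities
    return (gray > 127).astype(np.uint8)            # assumed threshold

mask = annotation_to_binary_mask("crack_annotation.png")  # hypothetical file
print(mask.shape, mask.min(), mask.max())
```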
This algorithm can be divided into separate procedures. The loss function is used to evaluate the difference between the predicted value and the real value of the model. The better the loss function, the better the performance of the model.
Different models usually use different loss functions. The Dice loss is a smoothed Dice-coefficient function and is the one most commonly used in segmentation problems [36]. The boundary loss is applied to tasks with highly unbalanced segments [37]. The Lovász-softmax loss is often used to solve submodular minimization problems [38]. To avoid the slow weight updates of the squared loss and to assess the discrepancy between the ground truth and the predicted logits, the cross-entropy loss was selected as the loss function [39].
The generated loss can also be utilized to update the model parameters as well as to evaluate the performance of crack detection [40]. The cross-entropy loss measures the performance of a classification model whose output is a probability value between 0 and 1. It increases as the predicted probability diverges from the actual label [24]. For example, a predicted probability close to 0 for a pixel whose actual label is 1 results in a large loss, while a perfect model would have a log loss of 0. The cross-entropy loss function is given by the following formula.
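The formula itself is elided in the source; for a pixel with ground-truth label $y \in \{0, 1\}$ and predicted crack probability $p$, the standard binary cross-entropy it refers to presumably has the form

$$L_{CE}(p, y) = -\left[\, y \log p + (1 - y) \log(1 - p) \,\right],$$

averaged over all pixels of the probability map to give the training loss.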
Sample imbalance usually exists in the dataset, and it is a problem that many machine learning tasks encounter. If samples of a certain class occupy most of the training set, they are called simple samples. Because of the large number of simple samples, their contribution to the loss over the entire training set is very large, which can leave the model poorly or incompletely trained.
The loss may then become stuck in a poor local optimum, so the loss function cannot converge to the best result during training. To solve the imbalance problem, this paper introduces a weighting factor and an adjustable focusing parameter to regulate the cross-entropy function.
This regulated function is given in formula (3), in which the weighting factor and the focusing parameter take fixed values.
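The regulated cross-entropy described here matches the focal loss, which the paper later confirms is applied to the final model. With weighting factor $\alpha$ and focusing parameter $\gamma$, formula (3) is presumably of the standard form

$$L_{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t), \qquad p_t = \begin{cases} p & \text{if } y = 1, \\ 1 - p & \text{otherwise,} \end{cases}$$

where the specific values of $\alpha$ and $\gamma$ used in the paper are not recoverable from the text. The factor $(1 - p_t)^{\gamma}$ down-weights easy, well-classified pixels so that training focuses on the hard crack pixels.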
This article uses the Adam algorithm to optimize the model. The Adam algorithm combines the momentum method and the RMSProp (root mean square propagation) algorithm [41]. It is also based on the gradient descent method, but Adam keeps the parameter changes within a certain range during each iteration.
The parameters therefore do not change sharply due to a large gradient value calculated at a given step, and the parameter values remain relatively stable. According to the method proposed by Smith in [42], one first sets a very small learning rate and gradually increases it while observing the loss; the optimal learning rate is then determined from the result. The initial optimal learning rate here is set to a small value determined by this test.
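As a concrete illustration of this learning-rate range test, the following is a minimal sketch assuming a PyTorch training setup; `model`, `train_loader`, and `loss_fn` are placeholders, and the start and end rates are illustrative rather than the paper's actual values.

```python
# Smith's learning-rate range test: start from a tiny rate, increase it
# exponentially each batch, and record the loss. The optimal initial rate
# is read off just before the loss starts to diverge.
import torch

def lr_range_test(model, train_loader, loss_fn, lr_start=1e-7, lr_end=1.0, steps=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr_start)
    gamma = (lr_end / lr_start) ** (1.0 / steps)  # multiplicative step per batch
    history = []
    data_iter = iter(train_loader)  # assumes the loader yields >= `steps` batches
    for _ in range(steps):
        x, y = next(data_iter)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        history.append((optimizer.param_groups[0]["lr"], loss.item()))
        for group in optimizer.param_groups:
            group["lr"] *= gamma  # exponentially increase the learning rate
    return history  # plot loss vs. rate and pick the rate just before divergence
```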
Weight decay is a form of regularization that plays an important role in training, so it needs to be set appropriately. Weight decay multiplies each weight by a factor slightly less than 1 at each gradient-descent step. Based on experience, a few small candidate values are selected for testing.
Larger weight-decay values are set for smaller datasets and model structures, while smaller values are set for larger datasets and deeper model structures. Considering the size of the dataset used in this study and the test results, a small weight-decay value was selected. The momentum value was likewise chosen from its usual range.
These choices fix the initial learning rate of the Adam algorithm used in this paper. The Adam update formulas are given below. Formulas (4) and (5) compute moving averages of the gradient and of the squared gradient, so that each update depends on the historical values. Formulas (6) and (7) correct the initially large deviation of the moving averages (the bias correction). Formula (8) is the parameter-update formula.
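The update formulas themselves are elided in the source; based on the roles described for formulas (4)-(8), they are presumably the standard Adam equations, with gradient $g_t$, decay rates $\beta_1, \beta_2$, learning rate $\eta$, and a small constant $\epsilon$:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \tag{4}$$
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \tag{5}$$
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t} \tag{6}$$
$$\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \tag{7}$$
$$\theta_t = \theta_{t-1} - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} \tag{8}$$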
To save training time and help the training generalize, part of the parameters in the convolutional layers of the downsampling path are initialized from pretrained VGG19 weights; the filters of the upsampling path are initialized from a truncated normal distribution with zero mean and a small standard deviation. Several rounds of hyperparameter adjustment and training are carried out for each model to drive the loss function toward the global optimum. An average of 30 training epochs was used, with a fixed number of training rounds per epoch.
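A minimal sketch of this initialization scheme follows, assuming a PyTorch implementation (the paper does not state its framework); the standard-deviation value is illustrative because the exact figure is elided in the text, and the encoder is assumed to mirror the VGG19 convolution shapes.

```python
# Initialize the downsampling path from pretrained VGG19 and the upsampling
# path from a zero-mean truncated normal, as described above.
import torch
import torch.nn as nn
import torchvision

def init_encoder_from_vgg19(encoder_convs):
    """Copy pretrained VGG19 convolution weights into matching encoder convs."""
    vgg = torchvision.models.vgg19(pretrained=True)
    vgg_convs = [m for m in vgg.features if isinstance(m, nn.Conv2d)]
    for dst, src in zip(encoder_convs, vgg_convs):  # shapes assumed to match
        dst.weight.data.copy_(src.weight.data)
        dst.bias.data.copy_(src.bias.data)

def init_decoder(decoder, std=0.01):  # std is an assumed placeholder value
    """Truncated-normal initialization (zero mean) for the upsampling filters."""
    for m in decoder.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            nn.init.trunc_normal_(m.weight, mean=0.0, std=std)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
```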
Different from the learning rate, the value of the batch size does not affect the calculation of the training time. The batch size is limited by the hardware storage, while the learning rate is not. The setting of the batch size needs to take into account the learning rate and the GPU computing power; generally speaking, the learning rate is directly proportional to the batch size.
Considering the hardware storage and GPU computing power of the machine used, the batch size was set to 2. Models are autosaved after every epoch during training, with the minimum loss value monitored. The accuracy of the model was then verified on the test set, and the model with the highest accuracy was saved as the final model. The output of the modified fully convolutional network is a probability map with pixel values ranging from 0 to 1, in which a dark pixel on the white background indicates that the pixel is more likely to be a crack pixel.
The probability map is then binarized with a fixed threshold. It is common to use k-fold cross-validation to evaluate machine learning models, and 3-fold cross-validation is used in this article: the sample images are randomly split into 3 groups numbered 0 to 2, and each time two groups are selected as the training set while the remaining group serves as the validation set. As for the efficiency of the proposed model, frames per second (FPS) can be used to evaluate it.
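A minimal sketch of this 3-fold split is shown below, using scikit-learn's KFold; the file names and dataset size are hypothetical.

```python
# Randomly split the images into 3 groups; each fold trains on two groups
# and validates on the remaining one, as described above.
import numpy as np
from sklearn.model_selection import KFold

image_paths = np.array([f"img_{i}.jpg" for i in range(3000)])  # hypothetical dataset
kfold = KFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(image_paths)):
    train_set, val_set = image_paths[train_idx], image_paths[val_idx]
    print(f"fold {fold}: {len(train_set)} train / {len(val_set)} val")
```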
To evaluate and assess accuracy in the semantic segmentation task, several metrics are commonly used. They are given in formulas (9)-(14), including pixel accuracy (PA), intersection over union (IoU), mean intersection over union (mIoU), precision, recall, and F1 score [43, 44]. For the crack detection task in this paper, the evaluation indexes used are mIoU, precision, recall, and F1.
The crack pixels are defined as positive instances. According to combinations of the labeled case and the predicted case, pixels are divided into four types: true positive (TP), false positive (FP), true negative (TN), and false negative (FN) [40, 45, 46]. The confusion matrix is usually used to evaluate the model; it is a situation-analysis table that summarizes the prediction results of a classification model in machine learning.
The records in the dataset are summarized in matrix form according to the real category and the category predicted by the classification model: the rows of the matrix represent the real values, and the columns represent the predicted values. Table 1 shows a sample of the confusion matrix, and Figure 3 gives the evaluation result of our model. The total class number including background is $k+1$, and $p_{ij}$ represents the number of pixels of class $i$ inferred to belong to class $j$. So $p_{ii}$, $p_{ij}$, and $p_{ji}$ (with $j \neq i$) give the pixel counts of true positives (TP), false positives (FP), and false negatives (FN), respectively.
PA is the simplest evaluation metric; it is the ratio of the number of correctly classified pixels to the total number of pixels in an image [47]. IoU is obtained by dividing the overlap of two regions by their union [48]; it is a standard metric for assessing performance in semantic segmentation tasks.
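Formulas (9)-(14) are elided in the source; using the $p_{ij}$ notation above and a total of $k+1$ classes, the standard definitions they presumably correspond to are

$$PA = \frac{\sum_{i=0}^{k} p_{ii}}{\sum_{i=0}^{k} \sum_{j=0}^{k} p_{ij}}, \qquad IoU = \frac{TP}{TP + FP + FN},$$

$$mIoU = \frac{1}{k+1} \sum_{i=0}^{k} \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}},$$

$$Precision = \frac{TP}{TP + FP}, \qquad Recall = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}.$$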
To achieve better results, an intuitive method is used in this article to show how the model classifies and learns the crack damage: the recognition process can easily be visualized with the help of a heat map. In deep learning, the heat map is produced by extracting the per-class weights and computing a weighted sum over the corresponding feature maps of the final convolutional layer.
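The weighted-sum visualization described here is in the spirit of class activation mapping; below is a minimal NumPy sketch under that assumption, with illustrative array shapes.

```python
# Compute a per-class heat map as a weighted sum of the last convolutional
# feature maps, then normalize it for display.
import numpy as np

def class_heat_map(feature_maps, class_weights):
    """feature_maps: (C, H, W) activations; class_weights: (C,) weights for one class."""
    heat = np.tensordot(class_weights, feature_maps, axes=1)  # weighted sum over channels
    heat = np.maximum(heat, 0.0)                              # keep positive evidence
    return heat / (heat.max() + 1e-8)                         # normalize to [0, 1]

maps = np.random.rand(512, 14, 14)   # hypothetical feature maps
weights = np.random.rand(512)        # hypothetical class weights
print(class_heat_map(maps, weights).shape)  # (14, 14); upsample to image size to display
```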
Generally speaking, the heat map tells us which pixels the model uses to decide whether an image is cracked or uncracked. Different training models impose different requirements on the size of the output image. Figure 4 gives several samples of heat maps from our model, vividly distinguishing the cracked and uncracked types. This paper compares the accuracy of the modified fully convolutional network proposed in this article with the U-net proposed by Liu et al. The U-net was used as a baseline, while the modified fully convolutional network was evaluated with its multiple modules: a backbone network containing batch normalization and convolutional layers, spatial pyramid average pooling, spatial pyramid max pooling, a deconvolutional layer, dilated convolutions, and a softmax layer.
For the modified fully convolutional network, focal loss is applied during training of the final model. The Adam optimizer is also used to update the weights. Each network adds one new module relative to the previous network and is then retrained so that accuracy and running time can be compared.
Evaluation metrics of PA, IoU, precision, recall, and F1 score for the different methods were calculated and are listed in Table 2. Identification results of the different methods are presented in Figures 5(a)-5(e). To make the recognition results clearer, representative cases are shown.
Matplotlib, a practical plotting library, is used to draw the training and detection accuracy curves of the two models against epochs (Figure 6). Figure 7 depicts the change in the loss function value during training for the two models.
According to the running results of FCN [9] and FCN with data preprocessing, image preprocessing of the raw dataset (graying, binarization, and threshold segmentation) yields a certain improvement in the overall recognition accuracy of the model, but the margin is relatively small.
Compared with the conventional FCN, the modified FCN with its innovative structure significantly improves recognition accuracy and recall. The modified FCN combined with image preprocessing achieves the highest precision, recall, and F1 among all of the methods compared.

This research concentrates on methods applying convolutional networks and computer-vision technology to identify and detect concrete cracks in structures.
According to the characteristics of concrete cracks, the task is essentially a semantic segmentation problem in computer vision, and the modified fully convolutional network structure is used to build a deep learning model for crack detection. The performance of this concrete crack detection method was tested and compared with the U-net proposed by Liu et al.
From the data and figures, it can be concluded that the modified fully convolutional network outperforms the U-net and other conventional DCNN methods, with greater robustness, effectiveness, and accuracy. This paper also examines the fundamental parameters governing the performance of the method; the proposed modified fully convolutional network is found to achieve high accuracy and high efficiency given a sufficiently large dataset. Because this method relies on image-processing technology, the camera must be able to obtain a clear field of view of the cracks.
This method is only applicable to concrete cracks, so its applicability to the inspection of other cracked engineering materials may be restricted. Besides, due to the limitations of the acquisition equipment and environment, it is difficult to capture microcracks or fine damage on some structural surfaces, and this micro-scale information often has very important guiding significance for determining the service state and failure mechanism of structures.
In the future, applied research on deep learning models suitable for structural microcrack detection can be carried out.
On the other hand, accurate crack detection by the proposed model can drive multidimensional data fusion between images and other high-precision data sources such as radar, topology instruments, and laser scanners; it can compensate for the limitations of data collected by a single sensor and promote the quantitative detection and intelligent management of structural damage information.
Conceptualization was done by Meng Meng and Kun Zhu. Methodology was done by Meng Meng. Software was acquired by Kun Zhu. Formal analysis was performed by Meng Meng. Investigation was conducted by Kun Zhu. Resources were acquired by Keqin Chen. Data curation was performed by Hang Qu. Writing (original draft preparation) was done by Meng Meng. Writing (review and editing) was done by Kun Zhu and Meng Meng.
Visualization was conducted by Meng Meng. Supervision was done by Keqin Chen. Project administration was done by Keqin Chen.
Funding acquisition was done by Keqin Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
If the user fails to authenticate to any of the available LDAP servers, they will fall back to normal core authentication. Documentation of the LDAP standard in general can be found here. The required libraries may be installed system-wide via package managers.
Administrators can configure the ordered list of LDAP servers to try on the plugin configuration page. Each server in the list has several properties.
This plugin is known to work against LDAP version 3. Using it with older versions of the protocol might work, but is not tested at this time. PyPI package: girder-oauth. This plugin allows users to log in using OAuth against a set of supported providers, rather than storing their credentials in the Girder instance. Specific instructions for each provider can be found below. By using OAuth, Girder users can avoid registering a new user in Girder, leaving it up to the OAuth provider to store their password and provide details of their identity.
The fact that a Girder user has logged in via an OAuth provider is stored in their user document instead of a password. OAuth users who need to authenticate with programmatic clients such as the girder-client python library should use API keys to do so. On the plugin configuration page, you must enter a Client ID and Client secret.
These must point back to your Girder instance. Users should then be able to log in with their Google account when they visit the login page and select the option to log in with Google. This plugin can also be extended to do more than just login behavior using the OAuth providers. You can hook into the event of a user logging in via OAuth, either before the Girder user login has occurred or afterward.
In our case, we want to do it after the Girder user has been fetched or created, if this is the first time logging in with these OAuth credentials.
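A minimal sketch of such a post-login hook is shown below; the event name follows the Girder OAuth plugin's documented pattern, but both it and the layout of `event.info` should be treated as assumptions to verify against your Girder version.

```python
# Bind a handler that runs after a user logs in via an OAuth provider.
from girder import events

def handle_oauth_login(event):
    user = event.info["user"]  # the Girder user fetched or created during login
    # e.g. add the user to a default group or record first-time logins here

# events.bind(eventName, handlerName, handler) is Girder's standard binding API.
events.bind("oauth.auth_callback.after", "my_plugin", handle_oauth_login)
```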
PyPI package: girder-readme. PyPI package: girder-sentry. The Sentry plugin enables the use of Sentry to detect and report errors in Girder. PyPI package: girder-terms. The terms may be set with markdown-formatted text, and users will be required to re-accept the terms whenever the content changes. Logged-in users have their acceptances stored and remembered permanently, while anonymous users have their acceptances stored only in the local browser.
PyPI package: girder-thumbnails. PyPI package: girder-user-quota. PyPI package: girder-virtual-folders. This plugin should be enabled if you want to use the Girder worker distributed processing engine to execute batch jobs initiated by the server. This is useful for deploying service architectures that involve both data management and scalable offline processing.
This plugin provides utilities for sending generic tasks to worker nodes for execution. The worker itself uses celery to manage the distribution of tasks, and builds in some useful Girder integrations on top of celery.
To authorize an upload on behalf of your user: navigate into any folder to which you have write access; from the Folder actions dropdown menu on the right, choose Authorize upload here. You will be taken to a page that allows generation of a secure, single-use URL.