Seeing Through the Rain (STRAIN): Vision Task Challenges in Real-world Rain Scenes

News!

    April 6, 2023: The test phase begins. The test dataset for Tracks 1 & 2 has been released!

    April 7, 2023: The test dataset for Tracks 3 & 4 has been released!

    April 20, 2023: The notification of winners has been released.

    A submission is only valid, and eligible to win, if the participants have submitted both their paper and their results. We will invite the winners of the competition (top three) to share their winning solutions at ICME and will award certificates to the top three.


Background

Degraded multimedia data, especially real rain, poses serious challenges to scene understanding. Although existing computer vision technologies achieve impressive results under synthetic conditions, especially those with simple degradation factors, they often fail on real-world rainy images, in which the background content suffers various complex degradations. In outdoor applications such as autonomous driving and surveillance systems in particular, the degradation factors on rainy days include rain intensity, occlusion, blur, droplets, reflection, wipers, and their blended effects, which commonly obscure scene content and cause misinterpretation. These adverse effects degrade the performance of various vision tasks, such as scene understanding, object detection, and identification. There is an urgent need for solutions that meet the demands of these real-world applications.


Challenge

[Figure: (a) Droplet, (b) Reflection, (c) Before wiper, (d) After wiper]

Shading Effect of Droplets (a): Droplets, the most common phenomenon on rainy days, prevent light from passing straight through to the object, and when they fall on the windshield, the content distortion caused by reflection and refraction becomes more pronounced. In Fig. (a), areas covered with rain droplets show significant misidentification: the green "vegetation" and the blue "sky" are both misidentified as the grey "building".

Road Reflection (b): Rainwater accumulated on the road commonly forms a mirror-like reflection, confusing the segmentation model with fake, indistinguishable object boundaries. As shown in Fig. (b), the reflections of cars are incorrectly identified.

Blurring by Windshield Wiper (c): The wiper removes the visual occlusion caused by rain accumulation. On real rainy days, a wiper pass clearly improves visibility for a human observer, but the abrupt change of scene content is hostile to a segmentation model running in real time. As shown in the yellow box in Fig. (c), the result is not consistent with the human's view.


Timeline

Date | Event
February 7, 2023 | Release
April 10, 2023 | Paper & Model Submission Deadline
April 20, 2023 | Notification
April 30, 2023 | Camera-Ready Regular Paper Submission

Grand Challenge papers have the same camera-ready deadline as regular papers, subject to the final release time of ICME. Please refer to the important dates for details.


Rainy WCity Datasets

Introduction

The Rainy WCity dataset consists of 700 real rainy images, 300 of which carry handcrafted annotations. We have observed three typical degradation factors in these images that affect image quality: raindrops, wipers, and reflections. We split Rainy WCity into 600 images for training and 100 for testing; the training data is composed of 400 images without annotations and 200 images with annotations, and each split covers the three degradation factors. The size of each image is 1,920×1,080. Since the dataset is captured in real rain scenes, there are no pixel-wise paired ground-truth reference images.

Rainy WCity covers various rain patterns and the negative visual effects they introduce, including raindrops, wipers, reflection, refraction, shadows, windshield blurring, etc. Our dataset is distinguished from others by the following features: 1) Diverse rain patterns: light, moderate, and heavy rain scenarios; 2) Diverse degradation factors: besides typical rain occlusion, our dataset covers numerous adverse degradation effects that commonly occur on rainy days but are overlooked in other real-rain datasets, e.g., raindrop interference, windshield blurring, and road reflection.

In modern digital cameras, the output image is not the raw image, but the result of passing the raw sensor data through the image signal processing (ISP) pipeline. In practice, if the ISP pipeline does not perform a denoising step, the processed sensor noise deteriorates the output image by introducing non-Gaussian noise. Therefore, in Tracks 3 & 4, we processed the images to simulate raw degradation scenes. Specifically, we set up a mixture of noise with random probabilities: we adopt JPEG compression with different quality factors and generate processed camera sensor noise via a reverse-forward camera ISP pipeline model and a RAW image noise model, then apply these degradations in randomly shuffled order to synthesize the raw degradation images.
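The exact ISP/RAW noise models used for Tracks 3 & 4 are not released on this page, but the randomly-shuffled-degradation idea can be sketched as follows. This is a minimal illustration only: simple Gaussian noise and a Pillow JPEG round-trip stand in for the actual sensor-noise and compression models, and all parameter ranges are assumptions.

```python
import io
import random

import numpy as np
from PIL import Image

def add_gaussian_noise(img, sigma_range=(1.0, 10.0)):
    """Add zero-mean Gaussian noise with a randomly chosen sigma
    (a crude stand-in for the processed camera sensor noise)."""
    sigma = random.uniform(*sigma_range)
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def jpeg_compress(img, quality_range=(30, 90)):
    """Round-trip the image through JPEG at a random quality factor."""
    quality = random.randint(*quality_range)
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

def degrade(img):
    """Apply the degradation operations in a randomly shuffled order,
    mirroring the challenge's shuffled-degradation synthesis."""
    ops = [add_gaussian_noise, jpeg_compress]
    random.shuffle(ops)
    for op in ops:
        img = op(img)
    return img

clean = np.full((64, 64, 3), 128, dtype=np.uint8)  # toy grey image
degraded = degrade(clean)
print(degraded.shape, degraded.dtype)
```

Because the operation order is shuffled per image, repeated calls produce different degradation combinations, which is the property the track organizers describe.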


Topic

This challenge aims at addressing the visibility problem of vision tasks under adverse weather scenarios (e.g., fog, rain, low illumination) and at sharing ideas and discussions on current trends, issues, and future directions. The topics include but are not limited to:

Semantic Segmentation under Real Rain Scene

Object Detection under Real Rain Scene

Semantic Segmentation under Simulated Raw Degradation Scene

Object Detection under Simulated Raw Degradation Scene


Download

Baidu Netdisk: Rainy WCity

Google Drive: Rainy WCity

Ask for Permission

Track 1: Semantic Segmentation under Real Rain Scene

Track 2: Object Detection under Real Rain Scene

Track 3: Semantic Segmentation under Simulated Raw Degradation Scene

Track 4: Object Detection under Simulated Raw Degradation Scene

If you would like to use our dataset, please download the release file. The Rainy WCity DATABASE RELEASE AGREEMENT can be downloaded here:

      Rainy WCity DATABASE RELEASE.docx


Evaluation Protocol

Task / Scene | Metrics
Semantic Segmentation | mIoU
Object Detection | mAP
Real Rain Scene | NIQE, SSEQ, PIQE, Perceptual Index (PI), BIoU
Simulated Raw Degradation Scene | PSNR, SSIM

In the semantic segmentation field, metrics such as Mean Intersection over Union (mIoU) are commonly used to evaluate segmentation quality.
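As a concrete reference, mIoU can be computed from a confusion matrix accumulated over the test set: per-class IoU is TP / (TP + FP + FN), averaged over the classes present. A minimal NumPy sketch (the `ignore_index` convention for unlabeled pixels is an assumption, not part of the challenge specification):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Per-class IoU from a confusion matrix, averaged over classes.

    pred, gt: integer label maps of the same shape.
    Classes absent from both prediction and ground truth are skipped.
    """
    mask = gt != ignore_index
    pred, gt = pred[mask], gt[mask]
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - tp  # TP + FP + FN per class
    valid = union > 0
    return (tp[valid] / union[valid]).mean()

gt = np.array([[0, 0, 1], [1, 1, 2]])
pred = np.array([[0, 1, 1], [1, 1, 2]])
print(mean_iou(pred, gt, num_classes=3))  # → 0.75
```

In practice the confusion matrix is summed over all test images before the final division, so that large and small images are weighted by pixel count.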

In the object detection field, the conventional Mean Average Precision (mAP) metric is used to measure detection effectiveness.
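For orientation, the per-class Average Precision underlying mAP sweeps the precision-recall curve of confidence-sorted detections; mAP50 fixes the IoU matching threshold at 0.5, while mAP50:95 averages AP over thresholds from 0.5 to 0.95. The sketch below assumes the true/false-positive matching against ground truth has already been done upstream, and uses all-point interpolation; it is an illustrative computation, not the challenge's official scorer.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP for one class: sort detections by confidence, sweep the
    precision-recall curve, integrate with all-point interpolation.

    scores: detection confidences; is_tp: 1 if the detection matched
    a ground-truth box (at the chosen IoU threshold); num_gt: number
    of ground-truth boxes of this class.
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / num_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision monotonically non-increasing (right-to-left max).
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# 3 detections, 2 ground-truth boxes: TP, FP, TP by descending score.
print(average_precision([0.9, 0.8, 0.7], [1, 0, 1], num_gt=2))  # → 0.8333...
```

mAP is then the mean of these per-class APs; for mAP50:95, the matching and AP computation are repeated at each IoU threshold and averaged.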

(Additional) Since the real rain scene tracks relate to image enhancement and lack ground-truth references, no-reference subjective quality metrics are also applicable, e.g., Spatial-Spectral Entropy-based Quality (SSEQ), Natural Image Quality Evaluator (NIQE), Perception-based Image Quality Evaluator (PIQE), Perceptual Index (PI), and Boundary IoU (BIoU).

(Additional) To measure fidelity, we use the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index.
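As a quick reference, PSNR is 10·log10(MAX² / MSE) between the reference and the restored image. A minimal NumPy sketch follows; SSIM is considerably more involved (local means, variances, and covariances over a sliding window) and is typically taken from a library such as scikit-image's `structural_similarity`.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8), dtype=np.uint8)
test = ref.copy()
test[0, 0] = 16  # one corrupted pixel -> MSE = 256/64 = 4
print(round(psnr(ref, test), 2))  # → 42.11
```

Higher PSNR means lower pixel-wise error; SSIM instead rewards preservation of local structure, which is why the two are reported together.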


Paper Submission

Submission deadline

Length: Papers must be no longer than 6 pages, including all text, figures, and references.

Format: Grand Challenge papers have the same format as regular papers; see the example paper under the General Information section below. However, their review is single-blind.

Submission: Submit the written component via CMT under the appropriate Grand Challenge track. Submit the data component, if any, directly to the Grand Challenge organizers as specified on the appropriate Grand Challenge site.

Review: Submissions of both written and data components will be reviewed directly by the Grand Challenge organizers. Accepted submissions (written component only) will be included in the USB Proceedings and the authors will be given the opportunity to present their work at ICME. “Winning” submissions will be announced by the Grand Challenge organizers at the conference.

Submissions may be accompanied by up to 20 MB of supplemental material following the same guidelines as regular and special session papers.

Presentation guarantee: As with accepted Regular and Special Session papers, accepted Grand Challenge papers must be registered by the author deadline and presented at the conference; otherwise they will not be included in IEEE Xplore.

A Grand Challenge paper is covered by a full-conference registration only.

Note: Submitted GC papers may initially omit the results section, which should be completed after the GC organizers have finished their evaluations.


Organizers

Results

Tracks 1 & 3: Semantic Segmentation (Real Rain / Simulated Raw Degradation Scenes)

Track | Team | road | sidewalk | building | wall | fence | pole | light | sign | vegetation | sky | person | rider | car | truck | bus | motorcycle | bicycle | mIoU
Track 1 | 代码敲不队 | 94.64 | 53.88 | 85.94 | 87.05 | 78.52 | 46.86 | 62.96 | 79.58 | 88.75 | 96.51 | 54.83 | 6.63 | 87.14 | 7.77 | 83.64 | 26.72 | 50.69 | 64.24
Track 1 | 小队不署名 | 94.62 | 61.13 | 84.76 | 84.19 | 78.94 | 39.24 | 58.68 | 80.36 | 86.46 | 96.36 | 24.95 | 18.49 | 85.26 | 9.10 | 70.33 | 29.86 | 48.78 | 61.85
Track 1 | STRUCT Derain Group | 84.94 | 24.04 | 75.90 | 47.01 | 68.22 | 27.66 | 40.50 | 51.25 | 83.82 | 94.67 | 41.76 | 0.17 | 78.89 | 0.00 | 70.27 | 27.92 | 36.06 | 50.18
Track 1 | 可乐 | 86.29 | 14.46 | 72.63 | 69.72 | 69.36 | 19.82 | 14.25 | 43.89 | 65.62 | 95.14 | 37.28 | 8.34 | 69.17 | 2.66 | 20.56 | 6.48 | 33.81 | 42.91
Track 1 | 任意代码 | 87.08 | 38.99 | 65.55 | 69.92 | 28.08 | 16.52 | 6.30 | 35.55 | 73.83 | 85.83 | 28.99 | 3.04 | 64.24 | 4.57 | 18.07 | 6.60 | 26.44 | 38.80
Track 1 | 狂风暴雨队 | 78.53 | 14.00 | 56.93 | 57.94 | 43.91 | 15.20 | 32.34 | 44.85 | 60.14 | 87.15 | 12.48 | 2.51 | 33.39 | 0.31 | 53.74 | 4.09 | 32.23 | 37.04
Track 1 | 汉码冲冲冲 | 77.11 | 24.20 | 50.35 | 63.40 | 55.98 | 18.27 | 19.25 | 10.47 | 63.45 | 43.74 | 36.83 | 2.40 | 59.13 | 0.31 | 8.73 | 7.85 | 22.75 | 33.19
Track 1 | Smart Segmenters | 78.83 | 0.37 | 59.77 | 19.07 | 54.02 | 5.04 | 3.25 | 64.93 | 74.18 | 80.92 | 27.59 | 0.00 | 69.22 | 0.00 | 9.69 | 0.00 | 0.14 | 32.18
Track 3 | 小队不署名 | 80.84 | 51.22 | 73.77 | 75.52 | 65.59 | 36.99 | 59.55 | 68.70 | 84.00 | 80.84 | 43.81 | 37.72 | 70.14 | 78.64 | 12.91 | 21.54 | 13.34 | 56.18
Track 3 | STRUCT Derain Group | 79.87 | 29.35 | 77.63 | 60.01 | 62.30 | 31.94 | 23.47 | 42.98 | 82.56 | 94.50 | 39.75 | 38.18 | 67.66 | 3.88 | 63.38 | 11.02 | 18.41 | 48.64
Track 3 | Smart Segmenters | 64.49 | 0.09 | 39.88 | 15.29 | 34.88 | 5.53 | 0.06 | 37.37 | 56.55 | 70.15 | 20.26 | 3.07 | 55.27 | 19.98 | 3.15 | 0.00 | 0.00 | 25.06

Note: we will select the top three winners in Tracks 1 and 3, respectively.

Track 2: Object Detection under Real Rain Scene

Team | car | person | bicycle | bus | truck | motorcycle | rider | mAP50/mAP50:95
深藏blue | 68.8/48.7 | 42.6/18 | 60.8/39.6 | 35.4/25.8 | 56.6/42.3 | 29.3/11.5 | 41.2/14.1 | 47.8/28.6
Wanderers Team | 68.5/46.2 | 37.5/15.3 | 50.9/27.6 | 37.5/24.9 | 55.2/39.8 | 27.5/7.97 | 35.5/13.8 | 44.7/25.1
Hope of star | 67.6/46.2 | 33.4/14 | 48.8/21.4 | 35.5/24.2 | 54.2/38 | 30.4/10 | 38.7/13.6 | 44.1/23.9
旺仔小分队 | 66.6/44.2 | 29.6/11 | 39.3/16.5 | 33.3/20.8 | 49.7/34.7 | 31.9/9.38 | 38.2/12.4 | 41.2/21.3
知行合一组 | 69.9/46.7 | 36.6/15.1 | 48.2/30.2 | 34.5/27.4 | 60.8/41.1 | 28.5/9.69 | 24.3/8.64 | 43.3/25.6
梦之队 | 68.2/45.7 | 36/15.3 | 64.6/36.8 | 36.7/23.5 | 55.2/40.5 | 24.3/13 | 26.1/12.2 | 44.4/26.7
Ambition Group | 64.7/42.5 | 38.9/17.6 | 69/37.3 | 36.8/25.1 | 54/37.1 | 30.6/10.5 | 26.7/7.7 | 45.8/25.4
基因重组 | 65.2/44.3 | 36.7/14.4 | 52.7/34.7 | 35.7/25.1 | 53.4/36.9 | 28.2/11.5 | 22.7/9.67 | 42.1/25.2


Note: we will select the top winners in Track 2.


Communication & QA

zhongx@whut.edu.cn